Mar 18 09:51:55.715967 master-0 systemd[1]: Starting Kubernetes Kubelet...
Mar 18 09:51:56.669437 master-0 kubenswrapper[3991]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 18 09:51:56.669437 master-0 kubenswrapper[3991]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Mar 18 09:51:56.669437 master-0 kubenswrapper[3991]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 18 09:51:56.669437 master-0 kubenswrapper[3991]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 18 09:51:56.669437 master-0 kubenswrapper[3991]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 18 09:51:56.669437 master-0 kubenswrapper[3991]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 18 09:51:56.672237 master-0 kubenswrapper[3991]: I0318 09:51:56.672048 3991 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 18 09:51:56.686426 master-0 kubenswrapper[3991]: W0318 09:51:56.686358 3991 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 09:51:56.686426 master-0 kubenswrapper[3991]: W0318 09:51:56.686396 3991 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 09:51:56.686426 master-0 kubenswrapper[3991]: W0318 09:51:56.686405 3991 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 09:51:56.686426 master-0 kubenswrapper[3991]: W0318 09:51:56.686415 3991 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 09:51:56.686426 master-0 kubenswrapper[3991]: W0318 09:51:56.686423 3991 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 09:51:56.686426 master-0 kubenswrapper[3991]: W0318 09:51:56.686432 3991 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 09:51:56.686426 master-0 kubenswrapper[3991]: W0318 09:51:56.686440 3991 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 09:51:56.686426 master-0 kubenswrapper[3991]: W0318 09:51:56.686449 3991 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 09:51:56.686797 master-0 kubenswrapper[3991]: W0318 09:51:56.686457 3991 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 09:51:56.686797 master-0 kubenswrapper[3991]: W0318 09:51:56.686466 3991 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 09:51:56.686797 master-0 kubenswrapper[3991]: W0318 09:51:56.686474 3991 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 09:51:56.686797 master-0 kubenswrapper[3991]: W0318 09:51:56.686482 3991 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 09:51:56.686797 master-0 kubenswrapper[3991]: W0318 09:51:56.686490 3991 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 09:51:56.686797 master-0 kubenswrapper[3991]: W0318 09:51:56.686497 3991 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 09:51:56.686797 master-0 kubenswrapper[3991]: W0318 09:51:56.686505 3991 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 09:51:56.686797 master-0 kubenswrapper[3991]: W0318 09:51:56.686514 3991 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 09:51:56.686797 master-0 kubenswrapper[3991]: W0318 09:51:56.686522 3991 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 09:51:56.686797 master-0 kubenswrapper[3991]: W0318 09:51:56.686530 3991 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 09:51:56.686797 master-0 kubenswrapper[3991]: W0318 09:51:56.686538 3991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 09:51:56.686797 master-0 kubenswrapper[3991]: W0318 09:51:56.686546 3991 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 09:51:56.686797 master-0 kubenswrapper[3991]: W0318 09:51:56.686555 3991 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 09:51:56.686797 master-0 kubenswrapper[3991]: W0318 09:51:56.686565 3991 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 09:51:56.686797 master-0 kubenswrapper[3991]: W0318 09:51:56.686575 3991 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 09:51:56.686797 master-0 kubenswrapper[3991]: W0318 09:51:56.686584 3991 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 09:51:56.686797 master-0 kubenswrapper[3991]: W0318 09:51:56.686593 3991 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 09:51:56.686797 master-0 kubenswrapper[3991]: W0318 09:51:56.686601 3991 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 09:51:56.686797 master-0 kubenswrapper[3991]: W0318 09:51:56.686609 3991 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 09:51:56.687547 master-0 kubenswrapper[3991]: W0318 09:51:56.686618 3991 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 09:51:56.687547 master-0 kubenswrapper[3991]: W0318 09:51:56.686626 3991 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 09:51:56.687547 master-0 kubenswrapper[3991]: W0318 09:51:56.686635 3991 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 09:51:56.687547 master-0 kubenswrapper[3991]: W0318 09:51:56.686643 3991 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 09:51:56.687547 master-0 kubenswrapper[3991]: W0318 09:51:56.686678 3991 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 09:51:56.687547 master-0 kubenswrapper[3991]: W0318 09:51:56.686687 3991 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 09:51:56.687547 master-0 kubenswrapper[3991]: W0318 09:51:56.686695 3991 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 09:51:56.687547 master-0 kubenswrapper[3991]: W0318 09:51:56.686704 3991 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 09:51:56.687547 master-0 kubenswrapper[3991]: W0318 09:51:56.686712 3991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 09:51:56.687547 master-0 kubenswrapper[3991]: W0318 09:51:56.686720 3991 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 09:51:56.687547 master-0 kubenswrapper[3991]: W0318 09:51:56.686728 3991 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 09:51:56.687547 master-0 kubenswrapper[3991]: W0318 09:51:56.686736 3991 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 09:51:56.687547 master-0 kubenswrapper[3991]: W0318 09:51:56.686743 3991 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 09:51:56.687547 master-0 kubenswrapper[3991]: W0318 09:51:56.686751 3991 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 09:51:56.687547 master-0 kubenswrapper[3991]: W0318 09:51:56.686759 3991 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 09:51:56.687547 master-0 kubenswrapper[3991]: W0318 09:51:56.686769 3991 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 09:51:56.687547 master-0 kubenswrapper[3991]: W0318 09:51:56.686779 3991 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 09:51:56.687547 master-0 kubenswrapper[3991]: W0318 09:51:56.686787 3991 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 09:51:56.687547 master-0 kubenswrapper[3991]: W0318 09:51:56.686797 3991 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 09:51:56.687547 master-0 kubenswrapper[3991]: W0318 09:51:56.686806 3991 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 09:51:56.688139 master-0 kubenswrapper[3991]: W0318 09:51:56.686813 3991 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 09:51:56.688139 master-0 kubenswrapper[3991]: W0318 09:51:56.686848 3991 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 09:51:56.688139 master-0 kubenswrapper[3991]: W0318 09:51:56.686856 3991 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 09:51:56.688139 master-0 kubenswrapper[3991]: W0318 09:51:56.686864 3991 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 09:51:56.688139 master-0 kubenswrapper[3991]: W0318 09:51:56.686872 3991 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 09:51:56.688139 master-0 kubenswrapper[3991]: W0318 09:51:56.686881 3991 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 09:51:56.688139 master-0 kubenswrapper[3991]: W0318 09:51:56.686888 3991 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 09:51:56.688139 master-0 kubenswrapper[3991]: W0318 09:51:56.686897 3991 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 09:51:56.688139 master-0 kubenswrapper[3991]: W0318 09:51:56.686905 3991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 09:51:56.688139 master-0 kubenswrapper[3991]: W0318 09:51:56.686915 3991 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 09:51:56.688139 master-0 kubenswrapper[3991]: W0318 09:51:56.686924 3991 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 09:51:56.688139 master-0 kubenswrapper[3991]: W0318 09:51:56.686932 3991 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 09:51:56.688139 master-0 kubenswrapper[3991]: W0318 09:51:56.686940 3991 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 09:51:56.688139 master-0 kubenswrapper[3991]: W0318 09:51:56.686947 3991 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 09:51:56.688139 master-0 kubenswrapper[3991]: W0318 09:51:56.686958 3991 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 09:51:56.688139 master-0 kubenswrapper[3991]: W0318 09:51:56.686966 3991 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 09:51:56.688139 master-0 kubenswrapper[3991]: W0318 09:51:56.686975 3991 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 09:51:56.688139 master-0 kubenswrapper[3991]: W0318 09:51:56.686982 3991 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 09:51:56.688139 master-0 kubenswrapper[3991]: W0318 09:51:56.686990 3991 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 09:51:56.688139 master-0 kubenswrapper[3991]: W0318 09:51:56.686998 3991 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 09:51:56.688691 master-0 kubenswrapper[3991]: W0318 09:51:56.687010 3991 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 09:51:56.688691 master-0 kubenswrapper[3991]: W0318 09:51:56.687020 3991 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 09:51:56.688691 master-0 kubenswrapper[3991]: W0318 09:51:56.687028 3991 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 09:51:56.688691 master-0 kubenswrapper[3991]: W0318 09:51:56.687036 3991 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 09:51:56.688691 master-0 kubenswrapper[3991]: W0318 09:51:56.687044 3991 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 09:51:56.688691 master-0 kubenswrapper[3991]: I0318 09:51:56.688089 3991 flags.go:64] FLAG: --address="0.0.0.0"
Mar 18 09:51:56.688691 master-0 kubenswrapper[3991]: I0318 09:51:56.688114 3991 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 18 09:51:56.688691 master-0 kubenswrapper[3991]: I0318 09:51:56.688128 3991 flags.go:64] FLAG: --anonymous-auth="true"
Mar 18 09:51:56.688691 master-0 kubenswrapper[3991]: I0318 09:51:56.688139 3991 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 18 09:51:56.688691 master-0 kubenswrapper[3991]: I0318 09:51:56.688151 3991 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 18 09:51:56.688691 master-0 kubenswrapper[3991]: I0318 09:51:56.688160 3991 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 18 09:51:56.688691 master-0 kubenswrapper[3991]: I0318 09:51:56.688172 3991 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 18 09:51:56.688691 master-0 kubenswrapper[3991]: I0318 09:51:56.688183 3991 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 18 09:51:56.688691 master-0 kubenswrapper[3991]: I0318 09:51:56.688193 3991 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 18 09:51:56.688691 master-0 kubenswrapper[3991]: I0318 09:51:56.688201 3991 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 18 09:51:56.688691 master-0 kubenswrapper[3991]: I0318 09:51:56.688214 3991 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 18 09:51:56.688691 master-0 kubenswrapper[3991]: I0318 09:51:56.688224 3991 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 18 09:51:56.688691 master-0 kubenswrapper[3991]: I0318 09:51:56.688234 3991 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 18 09:51:56.688691 master-0 kubenswrapper[3991]: I0318 09:51:56.688243 3991 flags.go:64] FLAG: --cgroup-root=""
Mar 18 09:51:56.688691 master-0 kubenswrapper[3991]: I0318 09:51:56.688251 3991 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 18 09:51:56.688691 master-0 kubenswrapper[3991]: I0318 09:51:56.688261 3991 flags.go:64] FLAG: --client-ca-file=""
Mar 18 09:51:56.688691 master-0 kubenswrapper[3991]: I0318 09:51:56.688270 3991 flags.go:64] FLAG: --cloud-config=""
Mar 18 09:51:56.688691 master-0 kubenswrapper[3991]: I0318 09:51:56.688279 3991 flags.go:64] FLAG: --cloud-provider=""
Mar 18 09:51:56.688691 master-0 kubenswrapper[3991]: I0318 09:51:56.688288 3991 flags.go:64] FLAG: --cluster-dns="[]"
Mar 18 09:51:56.689370 master-0 kubenswrapper[3991]: I0318 09:51:56.688299 3991 flags.go:64] FLAG: --cluster-domain=""
Mar 18 09:51:56.689370 master-0 kubenswrapper[3991]: I0318 09:51:56.688308 3991 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 18 09:51:56.689370 master-0 kubenswrapper[3991]: I0318 09:51:56.688317 3991 flags.go:64] FLAG: --config-dir=""
Mar 18 09:51:56.689370 master-0 kubenswrapper[3991]: I0318 09:51:56.688326 3991 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 18 09:51:56.689370 master-0 kubenswrapper[3991]: I0318 09:51:56.688336 3991 flags.go:64] FLAG: --container-log-max-files="5"
Mar 18 09:51:56.689370 master-0 kubenswrapper[3991]: I0318 09:51:56.688347 3991 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 18 09:51:56.689370 master-0 kubenswrapper[3991]: I0318 09:51:56.688356 3991 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 18 09:51:56.689370 master-0 kubenswrapper[3991]: I0318 09:51:56.688365 3991 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 18 09:51:56.689370 master-0 kubenswrapper[3991]: I0318 09:51:56.688375 3991 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 18 09:51:56.689370 master-0 kubenswrapper[3991]: I0318 09:51:56.688385 3991 flags.go:64] FLAG: --contention-profiling="false"
Mar 18 09:51:56.689370 master-0 kubenswrapper[3991]: I0318 09:51:56.688394 3991 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 18 09:51:56.689370 master-0 kubenswrapper[3991]: I0318 09:51:56.688403 3991 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 18 09:51:56.689370 master-0 kubenswrapper[3991]: I0318 09:51:56.688413 3991 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 18 09:51:56.689370 master-0 kubenswrapper[3991]: I0318 09:51:56.688422 3991 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 18 09:51:56.689370 master-0 kubenswrapper[3991]: I0318 09:51:56.688434 3991 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 18 09:51:56.689370 master-0 kubenswrapper[3991]: I0318 09:51:56.688443 3991 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 18 09:51:56.689370 master-0 kubenswrapper[3991]: I0318 09:51:56.688452 3991 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 18 09:51:56.689370 master-0 kubenswrapper[3991]: I0318 09:51:56.688460 3991 flags.go:64] FLAG: --enable-load-reader="false"
Mar 18 09:51:56.689370 master-0 kubenswrapper[3991]: I0318 09:51:56.688469 3991 flags.go:64] FLAG: --enable-server="true"
Mar 18 09:51:56.689370 master-0 kubenswrapper[3991]: I0318 09:51:56.688478 3991 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 18 09:51:56.689370 master-0 kubenswrapper[3991]: I0318 09:51:56.688489 3991 flags.go:64] FLAG: --event-burst="100"
Mar 18 09:51:56.689370 master-0 kubenswrapper[3991]: I0318 09:51:56.688498 3991 flags.go:64] FLAG: --event-qps="50"
Mar 18 09:51:56.689370 master-0 kubenswrapper[3991]: I0318 09:51:56.688508 3991 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 18 09:51:56.689370 master-0 kubenswrapper[3991]: I0318 09:51:56.688517 3991 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 18 09:51:56.689370 master-0 kubenswrapper[3991]: I0318 09:51:56.688526 3991 flags.go:64] FLAG: --eviction-hard=""
Mar 18 09:51:56.690233 master-0 kubenswrapper[3991]: I0318 09:51:56.688537 3991 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 18 09:51:56.690233 master-0 kubenswrapper[3991]: I0318 09:51:56.688546 3991 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 18 09:51:56.690233 master-0 kubenswrapper[3991]: I0318 09:51:56.688555 3991 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 18 09:51:56.690233 master-0 kubenswrapper[3991]: I0318 09:51:56.688565 3991 flags.go:64] FLAG: --eviction-soft=""
Mar 18 09:51:56.690233 master-0 kubenswrapper[3991]: I0318 09:51:56.688574 3991 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 18 09:51:56.690233 master-0 kubenswrapper[3991]: I0318 09:51:56.688583 3991 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 18 09:51:56.690233 master-0 kubenswrapper[3991]: I0318 09:51:56.688591 3991 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 18 09:51:56.690233 master-0 kubenswrapper[3991]: I0318 09:51:56.688600 3991 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 18 09:51:56.690233 master-0 kubenswrapper[3991]: I0318 09:51:56.688609 3991 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 18 09:51:56.690233 master-0 kubenswrapper[3991]: I0318 09:51:56.688618 3991 flags.go:64] FLAG: --fail-swap-on="true"
Mar 18 09:51:56.690233 master-0 kubenswrapper[3991]: I0318 09:51:56.688627 3991 flags.go:64] FLAG: --feature-gates=""
Mar 18 09:51:56.690233 master-0 kubenswrapper[3991]: I0318 09:51:56.688637 3991 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 18 09:51:56.690233 master-0 kubenswrapper[3991]: I0318 09:51:56.688646 3991 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 18 09:51:56.690233 master-0 kubenswrapper[3991]: I0318 09:51:56.688655 3991 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 18 09:51:56.690233 master-0 kubenswrapper[3991]: I0318 09:51:56.688665 3991 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 18 09:51:56.690233 master-0 kubenswrapper[3991]: I0318 09:51:56.688674 3991 flags.go:64] FLAG: --healthz-port="10248"
Mar 18 09:51:56.690233 master-0 kubenswrapper[3991]: I0318 09:51:56.688683 3991 flags.go:64] FLAG: --help="false"
Mar 18 09:51:56.690233 master-0 kubenswrapper[3991]: I0318 09:51:56.688692 3991 flags.go:64] FLAG: --hostname-override=""
Mar 18 09:51:56.690233 master-0 kubenswrapper[3991]: I0318 09:51:56.688701 3991 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 18 09:51:56.690233 master-0 kubenswrapper[3991]: I0318 09:51:56.688711 3991 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 18 09:51:56.690233 master-0 kubenswrapper[3991]: I0318 09:51:56.688720 3991 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 18 09:51:56.690233 master-0 kubenswrapper[3991]: I0318 09:51:56.688729 3991 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 18 09:51:56.690233 master-0 kubenswrapper[3991]: I0318 09:51:56.688738 3991 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 18 09:51:56.690233 master-0 kubenswrapper[3991]: I0318 09:51:56.688746 3991 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 18 09:51:56.690233 master-0 kubenswrapper[3991]: I0318 09:51:56.688755 3991 flags.go:64] FLAG: --image-service-endpoint=""
Mar 18 09:51:56.690982 master-0 kubenswrapper[3991]: I0318 09:51:56.688765 3991 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 18 09:51:56.690982 master-0 kubenswrapper[3991]: I0318 09:51:56.688774 3991 flags.go:64] FLAG: --kube-api-burst="100"
Mar 18 09:51:56.690982 master-0 kubenswrapper[3991]: I0318 09:51:56.688783 3991 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 18 09:51:56.690982 master-0 kubenswrapper[3991]: I0318 09:51:56.688792 3991 flags.go:64] FLAG: --kube-api-qps="50"
Mar 18 09:51:56.690982 master-0 kubenswrapper[3991]: I0318 09:51:56.688801 3991 flags.go:64] FLAG: --kube-reserved=""
Mar 18 09:51:56.690982 master-0 kubenswrapper[3991]: I0318 09:51:56.688810 3991 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 18 09:51:56.690982 master-0 kubenswrapper[3991]: I0318 09:51:56.688820 3991 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 18 09:51:56.690982 master-0 kubenswrapper[3991]: I0318 09:51:56.688856 3991 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 18 09:51:56.690982 master-0 kubenswrapper[3991]: I0318 09:51:56.688865 3991 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 18 09:51:56.690982 master-0 kubenswrapper[3991]: I0318 09:51:56.688874 3991 flags.go:64] FLAG: --lock-file=""
Mar 18 09:51:56.690982 master-0 kubenswrapper[3991]: I0318 09:51:56.688883 3991 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 18 09:51:56.690982 master-0 kubenswrapper[3991]: I0318 09:51:56.688892 3991 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 18 09:51:56.690982 master-0 kubenswrapper[3991]: I0318 09:51:56.688901 3991 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 18 09:51:56.690982 master-0 kubenswrapper[3991]: I0318 09:51:56.688927 3991 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 18 09:51:56.690982 master-0 kubenswrapper[3991]: I0318 09:51:56.688936 3991 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 18 09:51:56.690982 master-0 kubenswrapper[3991]: I0318 09:51:56.688945 3991 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 18 09:51:56.690982 master-0 kubenswrapper[3991]: I0318 09:51:56.688954 3991 flags.go:64] FLAG: --logging-format="text"
Mar 18 09:51:56.690982 master-0 kubenswrapper[3991]: I0318 09:51:56.688963 3991 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 18 09:51:56.690982 master-0 kubenswrapper[3991]: I0318 09:51:56.688973 3991 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 18 09:51:56.690982 master-0 kubenswrapper[3991]: I0318 09:51:56.688981 3991 flags.go:64] FLAG: --manifest-url=""
Mar 18 09:51:56.690982 master-0 kubenswrapper[3991]: I0318 09:51:56.688990 3991 flags.go:64] FLAG: --manifest-url-header=""
Mar 18 09:51:56.690982 master-0 kubenswrapper[3991]: I0318 09:51:56.689002 3991 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 18 09:51:56.690982 master-0 kubenswrapper[3991]: I0318 09:51:56.689011 3991 flags.go:64] FLAG: --max-open-files="1000000"
Mar 18 09:51:56.690982 master-0 kubenswrapper[3991]: I0318 09:51:56.689022 3991 flags.go:64] FLAG: --max-pods="110"
Mar 18 09:51:56.690982 master-0 kubenswrapper[3991]: I0318 09:51:56.689031 3991 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 18 09:51:56.691666 master-0 kubenswrapper[3991]: I0318 09:51:56.689040 3991 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 18 09:51:56.691666 master-0 kubenswrapper[3991]: I0318 09:51:56.689049 3991 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 18 09:51:56.691666 master-0 kubenswrapper[3991]: I0318 09:51:56.689058 3991 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 18 09:51:56.691666 master-0 kubenswrapper[3991]: I0318 09:51:56.689067 3991 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 18 09:51:56.691666 master-0 kubenswrapper[3991]: I0318 09:51:56.689077 3991 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 18 09:51:56.691666 master-0 kubenswrapper[3991]: I0318 09:51:56.689086 3991 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 18 09:51:56.691666 master-0 kubenswrapper[3991]: I0318 09:51:56.689107 3991 flags.go:64] FLAG: --node-status-max-images="50"
Mar 18 09:51:56.691666 master-0 kubenswrapper[3991]: I0318 09:51:56.689115 3991 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 18 09:51:56.691666 master-0 kubenswrapper[3991]: I0318 09:51:56.689124 3991 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 18 09:51:56.691666 master-0 kubenswrapper[3991]: I0318 09:51:56.689133 3991 flags.go:64] FLAG: --pod-cidr=""
Mar 18 09:51:56.691666 master-0 kubenswrapper[3991]: I0318 09:51:56.689142 3991 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422"
Mar 18 09:51:56.691666 master-0 kubenswrapper[3991]: I0318 09:51:56.689155 3991 flags.go:64] FLAG: --pod-manifest-path=""
Mar 18 09:51:56.691666 master-0 kubenswrapper[3991]: I0318 09:51:56.689163 3991 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 18 09:51:56.691666 master-0 kubenswrapper[3991]: I0318 09:51:56.689172 3991 flags.go:64] FLAG: --pods-per-core="0"
Mar 18 09:51:56.691666 master-0 kubenswrapper[3991]: I0318 09:51:56.689182 3991 flags.go:64] FLAG: --port="10250"
Mar 18 09:51:56.691666 master-0 kubenswrapper[3991]: I0318 09:51:56.689191 3991 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 18 09:51:56.691666 master-0 kubenswrapper[3991]: I0318 09:51:56.689200 3991 flags.go:64] FLAG: --provider-id=""
Mar 18 09:51:56.691666 master-0 kubenswrapper[3991]: I0318 09:51:56.689208 3991 flags.go:64] FLAG: --qos-reserved=""
Mar 18 09:51:56.691666 master-0 kubenswrapper[3991]: I0318 09:51:56.689217 3991 flags.go:64] FLAG: --read-only-port="10255"
Mar 18 09:51:56.691666 master-0 kubenswrapper[3991]: I0318 09:51:56.689226 3991 flags.go:64] FLAG: --register-node="true"
Mar 18 09:51:56.691666 master-0 kubenswrapper[3991]: I0318 09:51:56.689240 3991 flags.go:64] FLAG: --register-schedulable="true"
Mar 18 09:51:56.691666 master-0 kubenswrapper[3991]: I0318 09:51:56.689249 3991 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 18 09:51:56.691666 master-0 kubenswrapper[3991]: I0318 09:51:56.689264 3991 flags.go:64] FLAG: --registry-burst="10"
Mar 18 09:51:56.692336 master-0 kubenswrapper[3991]: I0318 09:51:56.689273 3991 flags.go:64] FLAG: --registry-qps="5"
Mar 18 09:51:56.692336 master-0 kubenswrapper[3991]: I0318 09:51:56.689285 3991 flags.go:64] FLAG: --reserved-cpus=""
Mar 18 09:51:56.692336 master-0 kubenswrapper[3991]: I0318 09:51:56.689294 3991 flags.go:64] FLAG: --reserved-memory=""
Mar 18 09:51:56.692336 master-0 kubenswrapper[3991]: I0318 09:51:56.689305 3991 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 18 09:51:56.692336 master-0 kubenswrapper[3991]: I0318 09:51:56.689314 3991 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 18 09:51:56.692336 master-0 kubenswrapper[3991]: I0318 09:51:56.689323 3991 flags.go:64] FLAG: --rotate-certificates="false"
Mar 18 09:51:56.692336 master-0 kubenswrapper[3991]: I0318 09:51:56.689332 3991 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 18 09:51:56.692336 master-0 kubenswrapper[3991]: I0318 09:51:56.689340 3991 flags.go:64] FLAG: --runonce="false"
Mar 18 09:51:56.692336 master-0 kubenswrapper[3991]: I0318 09:51:56.689351 3991 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 18 09:51:56.692336 master-0 kubenswrapper[3991]: I0318 09:51:56.689361 3991 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 18 09:51:56.692336 master-0 kubenswrapper[3991]: I0318 09:51:56.689370 3991 flags.go:64] FLAG: --seccomp-default="false"
Mar 18 09:51:56.692336 master-0 kubenswrapper[3991]: I0318 09:51:56.689379 3991 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 18 09:51:56.692336 master-0 kubenswrapper[3991]: I0318 09:51:56.689388 3991 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 18 09:51:56.692336 master-0 kubenswrapper[3991]: I0318 09:51:56.689398 3991 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 18 09:51:56.692336 master-0 kubenswrapper[3991]: I0318 09:51:56.689407 3991 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 18 09:51:56.692336 master-0 kubenswrapper[3991]: I0318 09:51:56.689416 3991 flags.go:64] FLAG: --storage-driver-password="root"
Mar 18 09:51:56.692336 master-0 kubenswrapper[3991]: I0318 09:51:56.689425 3991 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 18 09:51:56.692336 master-0 kubenswrapper[3991]: I0318 09:51:56.689434 3991 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 18 09:51:56.692336 master-0 kubenswrapper[3991]: I0318 09:51:56.689442 3991 flags.go:64] FLAG: --storage-driver-user="root"
Mar 18 09:51:56.692336 master-0 kubenswrapper[3991]: I0318 09:51:56.689452 3991 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 18 09:51:56.692336 master-0 kubenswrapper[3991]: I0318 09:51:56.689461 3991 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 18 09:51:56.692336 master-0 kubenswrapper[3991]: I0318 09:51:56.689470 3991 flags.go:64] FLAG: --system-cgroups=""
Mar 18 09:51:56.692336 master-0 kubenswrapper[3991]: I0318 09:51:56.689478 3991 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 18 09:51:56.692336 master-0 kubenswrapper[3991]: I0318 09:51:56.689493 3991 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 18 09:51:56.692336 master-0 kubenswrapper[3991]: I0318 09:51:56.689502 3991 flags.go:64] FLAG: --tls-cert-file=""
Mar 18 09:51:56.693073 master-0 kubenswrapper[3991]: I0318 09:51:56.689511 3991 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 18 09:51:56.693073 master-0 kubenswrapper[3991]: I0318 09:51:56.689523 3991 flags.go:64] FLAG: --tls-min-version=""
Mar 18 09:51:56.693073 master-0 kubenswrapper[3991]: I0318 09:51:56.689531 3991 flags.go:64] FLAG: --tls-private-key-file=""
Mar 18 09:51:56.693073 master-0 kubenswrapper[3991]: I0318 09:51:56.689540 3991 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 18 09:51:56.693073 master-0 kubenswrapper[3991]: I0318 09:51:56.689550 3991 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 18 09:51:56.693073 master-0 kubenswrapper[3991]: I0318 09:51:56.689559 3991 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 18 09:51:56.693073 master-0 kubenswrapper[3991]: I0318 09:51:56.689568 3991 flags.go:64] FLAG: --v="2"
Mar 18 09:51:56.693073 master-0 kubenswrapper[3991]: I0318 09:51:56.689579 3991 flags.go:64] FLAG: --version="false"
Mar 18 09:51:56.693073 master-0 kubenswrapper[3991]: I0318 09:51:56.689590 3991 flags.go:64] FLAG: --vmodule=""
Mar 18 09:51:56.693073 master-0 kubenswrapper[3991]: I0318 09:51:56.689601 3991 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 18 09:51:56.693073 master-0 kubenswrapper[3991]: I0318 09:51:56.689610 3991 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 18 09:51:56.693073 master-0 kubenswrapper[3991]: W0318 09:51:56.691027 3991 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 09:51:56.693073 master-0 kubenswrapper[3991]: W0318 09:51:56.691044 3991 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 09:51:56.693073 master-0 kubenswrapper[3991]: W0318 09:51:56.691054 3991 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 09:51:56.693073 master-0 kubenswrapper[3991]: W0318 09:51:56.691063 3991 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 09:51:56.693073 master-0 kubenswrapper[3991]: W0318 09:51:56.691073 3991 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 09:51:56.693073 master-0 kubenswrapper[3991]: W0318 09:51:56.691083 3991 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 09:51:56.693073 master-0 kubenswrapper[3991]: W0318 09:51:56.691091 3991 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 09:51:56.693073 master-0 kubenswrapper[3991]: W0318 09:51:56.691100 3991 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 09:51:56.693073 master-0 kubenswrapper[3991]: W0318 09:51:56.691108 3991 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 09:51:56.693073 master-0 kubenswrapper[3991]: W0318 09:51:56.691117 3991 feature_gate.go:330] unrecognized feature gate:
PersistentIPsForVirtualization Mar 18 09:51:56.693073 master-0 kubenswrapper[3991]: W0318 09:51:56.691125 3991 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 18 09:51:56.693681 master-0 kubenswrapper[3991]: W0318 09:51:56.691133 3991 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 18 09:51:56.693681 master-0 kubenswrapper[3991]: W0318 09:51:56.691141 3991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 18 09:51:56.693681 master-0 kubenswrapper[3991]: W0318 09:51:56.691149 3991 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 18 09:51:56.693681 master-0 kubenswrapper[3991]: W0318 09:51:56.691157 3991 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 18 09:51:56.693681 master-0 kubenswrapper[3991]: W0318 09:51:56.691164 3991 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 18 09:51:56.693681 master-0 kubenswrapper[3991]: W0318 09:51:56.691175 3991 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 18 09:51:56.693681 master-0 kubenswrapper[3991]: W0318 09:51:56.691185 3991 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 09:51:56.693681 master-0 kubenswrapper[3991]: W0318 09:51:56.691193 3991 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 09:51:56.693681 master-0 kubenswrapper[3991]: W0318 09:51:56.691202 3991 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 09:51:56.693681 master-0 kubenswrapper[3991]: W0318 09:51:56.691210 3991 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 09:51:56.693681 master-0 kubenswrapper[3991]: W0318 09:51:56.691218 3991 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 09:51:56.693681 master-0 kubenswrapper[3991]: W0318 09:51:56.691225 3991 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 09:51:56.693681 master-0 kubenswrapper[3991]: W0318 09:51:56.691233 3991 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 09:51:56.693681 master-0 kubenswrapper[3991]: W0318 09:51:56.691240 3991 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 09:51:56.693681 master-0 kubenswrapper[3991]: W0318 09:51:56.691251 3991 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 09:51:56.693681 master-0 kubenswrapper[3991]: W0318 09:51:56.691260 3991 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 09:51:56.693681 master-0 kubenswrapper[3991]: W0318 09:51:56.691268 3991 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 09:51:56.693681 master-0 kubenswrapper[3991]: W0318 09:51:56.691276 3991 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 09:51:56.693681 master-0 kubenswrapper[3991]: W0318 09:51:56.691284 3991 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 09:51:56.694284 master-0 kubenswrapper[3991]: W0318 09:51:56.691295 3991 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 09:51:56.694284 master-0 kubenswrapper[3991]: W0318 09:51:56.691306 3991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 09:51:56.694284 master-0 kubenswrapper[3991]: W0318 09:51:56.691315 3991 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 09:51:56.694284 master-0 kubenswrapper[3991]: W0318 09:51:56.691324 3991 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 09:51:56.694284 master-0 kubenswrapper[3991]: W0318 09:51:56.691332 3991 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 09:51:56.694284 master-0 kubenswrapper[3991]: W0318 09:51:56.691342 3991 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 09:51:56.694284 master-0 kubenswrapper[3991]: W0318 09:51:56.691350 3991 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 09:51:56.694284 master-0 kubenswrapper[3991]: W0318 09:51:56.691360 3991 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 09:51:56.694284 master-0 kubenswrapper[3991]: W0318 09:51:56.691369 3991 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 09:51:56.694284 master-0 kubenswrapper[3991]: W0318 09:51:56.691377 3991 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 09:51:56.694284 master-0 kubenswrapper[3991]: W0318 09:51:56.691386 3991 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 09:51:56.694284 master-0 kubenswrapper[3991]: W0318 09:51:56.691393 3991 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 09:51:56.694284 master-0 kubenswrapper[3991]: W0318 09:51:56.691401 3991 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 09:51:56.694284 master-0 kubenswrapper[3991]: W0318 09:51:56.691409 3991 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 09:51:56.694284 master-0 kubenswrapper[3991]: W0318 09:51:56.691417 3991 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 09:51:56.694284 master-0 kubenswrapper[3991]: W0318 09:51:56.691425 3991 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 09:51:56.694284 master-0 kubenswrapper[3991]: W0318 09:51:56.691433 3991 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 09:51:56.694284 master-0 kubenswrapper[3991]: W0318 09:51:56.691440 3991 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 09:51:56.694284 master-0 kubenswrapper[3991]: W0318 09:51:56.691448 3991 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 09:51:56.694284 master-0 kubenswrapper[3991]: W0318 09:51:56.691456 3991 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 09:51:56.694886 master-0 kubenswrapper[3991]: W0318 09:51:56.691464 3991 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 09:51:56.694886 master-0 kubenswrapper[3991]: W0318 09:51:56.691472 3991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 09:51:56.694886 master-0 kubenswrapper[3991]: W0318 09:51:56.691479 3991 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 09:51:56.694886 master-0 kubenswrapper[3991]: W0318 09:51:56.691487 3991 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 09:51:56.694886 master-0 kubenswrapper[3991]: W0318 09:51:56.691494 3991 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 09:51:56.694886 master-0 kubenswrapper[3991]: W0318 09:51:56.691502 3991 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 09:51:56.694886 master-0 kubenswrapper[3991]: W0318 09:51:56.691511 3991 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 09:51:56.694886 master-0 kubenswrapper[3991]: W0318 09:51:56.691518 3991 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 09:51:56.694886 master-0 kubenswrapper[3991]: W0318 09:51:56.691526 3991 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 09:51:56.694886 master-0 kubenswrapper[3991]: W0318 09:51:56.691534 3991 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 09:51:56.694886 master-0 kubenswrapper[3991]: W0318 09:51:56.691546 3991 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 09:51:56.694886 master-0 kubenswrapper[3991]: W0318 09:51:56.691556 3991 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 09:51:56.694886 master-0 kubenswrapper[3991]: W0318 09:51:56.691566 3991 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 09:51:56.694886 master-0 kubenswrapper[3991]: W0318 09:51:56.691574 3991 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 09:51:56.694886 master-0 kubenswrapper[3991]: W0318 09:51:56.691583 3991 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 09:51:56.694886 master-0 kubenswrapper[3991]: W0318 09:51:56.691592 3991 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 09:51:56.694886 master-0 kubenswrapper[3991]: W0318 09:51:56.691601 3991 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 09:51:56.694886 master-0 kubenswrapper[3991]: W0318 09:51:56.691608 3991 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 09:51:56.694886 master-0 kubenswrapper[3991]: W0318 09:51:56.691616 3991 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 09:51:56.695427 master-0 kubenswrapper[3991]: W0318 09:51:56.691624 3991 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 09:51:56.695427 master-0 kubenswrapper[3991]: W0318 09:51:56.691632 3991 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 09:51:56.695427 master-0 kubenswrapper[3991]: W0318 09:51:56.691640 3991 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 09:51:56.695427 master-0 kubenswrapper[3991]: I0318 09:51:56.691663 3991 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 09:51:56.705537 master-0 kubenswrapper[3991]: I0318 09:51:56.705481 3991 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 18 09:51:56.705537 master-0 kubenswrapper[3991]: I0318 09:51:56.705526 3991 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 18 09:51:56.705681 master-0 kubenswrapper[3991]: W0318 09:51:56.705648 3991 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 09:51:56.705681 master-0 kubenswrapper[3991]: W0318 09:51:56.705662 3991 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 09:51:56.705681 master-0 kubenswrapper[3991]: W0318 09:51:56.705671 3991 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 09:51:56.705681 master-0 kubenswrapper[3991]: W0318 09:51:56.705680 3991 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 09:51:56.705796 master-0 kubenswrapper[3991]: W0318 09:51:56.705689 3991 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 09:51:56.705796 master-0 kubenswrapper[3991]: W0318 09:51:56.705697 3991 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 09:51:56.705796 master-0 kubenswrapper[3991]: W0318 09:51:56.705705 3991 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 09:51:56.705796 master-0 kubenswrapper[3991]: W0318 09:51:56.705713 3991 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 09:51:56.705796 master-0 kubenswrapper[3991]: W0318 09:51:56.705721 3991 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 09:51:56.705796 master-0 kubenswrapper[3991]: W0318 09:51:56.705729 3991 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 09:51:56.705796 master-0 kubenswrapper[3991]: W0318 09:51:56.705737 3991 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 09:51:56.705796 master-0 kubenswrapper[3991]: W0318 09:51:56.705745 3991 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 09:51:56.705796 master-0 kubenswrapper[3991]: W0318 09:51:56.705752 3991 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 09:51:56.705796 master-0 kubenswrapper[3991]: W0318 09:51:56.705760 3991 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 09:51:56.705796 master-0 kubenswrapper[3991]: W0318 09:51:56.705768 3991 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 09:51:56.705796 master-0 kubenswrapper[3991]: W0318 09:51:56.705776 3991 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 09:51:56.705796 master-0 kubenswrapper[3991]: W0318 09:51:56.705785 3991 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 09:51:56.705796 master-0 kubenswrapper[3991]: W0318 09:51:56.705793 3991 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 09:51:56.705796 master-0 kubenswrapper[3991]: W0318 09:51:56.705801 3991 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 09:51:56.705796 master-0 kubenswrapper[3991]: W0318 09:51:56.705810 3991 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 09:51:56.706312 master-0 kubenswrapper[3991]: W0318 09:51:56.705883 3991 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 09:51:56.706312 master-0 kubenswrapper[3991]: W0318 09:51:56.705901 3991 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 09:51:56.706312 master-0 kubenswrapper[3991]: W0318 09:51:56.705911 3991 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 09:51:56.706312 master-0 kubenswrapper[3991]: W0318 09:51:56.705921 3991 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 09:51:56.706312 master-0 kubenswrapper[3991]: W0318 09:51:56.705930 3991 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 09:51:56.706312 master-0 kubenswrapper[3991]: W0318 09:51:56.705941 3991 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 09:51:56.706312 master-0 kubenswrapper[3991]: W0318 09:51:56.705951 3991 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 09:51:56.706312 master-0 kubenswrapper[3991]: W0318 09:51:56.705961 3991 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 09:51:56.706312 master-0 kubenswrapper[3991]: W0318 09:51:56.705972 3991 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 09:51:56.706312 master-0 kubenswrapper[3991]: W0318 09:51:56.705981 3991 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 09:51:56.706312 master-0 kubenswrapper[3991]: W0318 09:51:56.705990 3991 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 09:51:56.706312 master-0 kubenswrapper[3991]: W0318 09:51:56.705998 3991 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 09:51:56.706312 master-0 kubenswrapper[3991]: W0318 09:51:56.706006 3991 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 09:51:56.706312 master-0 kubenswrapper[3991]: W0318 09:51:56.706014 3991 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 09:51:56.706312 master-0 kubenswrapper[3991]: W0318 09:51:56.706024 3991 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 09:51:56.706312 master-0 kubenswrapper[3991]: W0318 09:51:56.706032 3991 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 09:51:56.706312 master-0 kubenswrapper[3991]: W0318 09:51:56.706043 3991 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 09:51:56.706312 master-0 kubenswrapper[3991]: W0318 09:51:56.706053 3991 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 09:51:56.706939 master-0 kubenswrapper[3991]: W0318 09:51:56.706062 3991 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 09:51:56.706939 master-0 kubenswrapper[3991]: W0318 09:51:56.706071 3991 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 09:51:56.706939 master-0 kubenswrapper[3991]: W0318 09:51:56.706079 3991 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 09:51:56.706939 master-0 kubenswrapper[3991]: W0318 09:51:56.706088 3991 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 09:51:56.706939 master-0 kubenswrapper[3991]: W0318 09:51:56.706097 3991 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 09:51:56.706939 master-0 kubenswrapper[3991]: W0318 09:51:56.706106 3991 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 09:51:56.706939 master-0 kubenswrapper[3991]: W0318 09:51:56.706114 3991 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 09:51:56.706939 master-0 kubenswrapper[3991]: W0318 09:51:56.706123 3991 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 09:51:56.706939 master-0 kubenswrapper[3991]: W0318 09:51:56.706131 3991 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 09:51:56.706939 master-0 kubenswrapper[3991]: W0318 09:51:56.706140 3991 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 09:51:56.706939 master-0 kubenswrapper[3991]: W0318 09:51:56.706148 3991 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 09:51:56.706939 master-0 kubenswrapper[3991]: W0318 09:51:56.706155 3991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 09:51:56.706939 master-0 kubenswrapper[3991]: W0318 09:51:56.706163 3991 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 09:51:56.706939 master-0 kubenswrapper[3991]: W0318 09:51:56.706171 3991 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 09:51:56.706939 master-0 kubenswrapper[3991]: W0318 09:51:56.706179 3991 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 09:51:56.706939 master-0 kubenswrapper[3991]: W0318 09:51:56.706189 3991 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 09:51:56.706939 master-0 kubenswrapper[3991]: W0318 09:51:56.706197 3991 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 09:51:56.706939 master-0 kubenswrapper[3991]: W0318 09:51:56.706205 3991 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 09:51:56.706939 master-0 kubenswrapper[3991]: W0318 09:51:56.706212 3991 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 09:51:56.706939 master-0 kubenswrapper[3991]: W0318 09:51:56.706220 3991 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 09:51:56.707554 master-0 kubenswrapper[3991]: W0318 09:51:56.706227 3991 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 09:51:56.707554 master-0 kubenswrapper[3991]: W0318 09:51:56.706235 3991 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 09:51:56.707554 master-0 kubenswrapper[3991]: W0318 09:51:56.706243 3991 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 09:51:56.707554 master-0 kubenswrapper[3991]: W0318 09:51:56.706250 3991 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 09:51:56.707554 master-0 kubenswrapper[3991]: W0318 09:51:56.706258 3991 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 09:51:56.707554 master-0 kubenswrapper[3991]: W0318 09:51:56.706265 3991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 09:51:56.707554 master-0 kubenswrapper[3991]: W0318 09:51:56.706273 3991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 09:51:56.707554 master-0 kubenswrapper[3991]: W0318 09:51:56.706281 3991 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 09:51:56.707554 master-0 kubenswrapper[3991]: W0318 09:51:56.706289 3991 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 09:51:56.707554 master-0 kubenswrapper[3991]: W0318 09:51:56.706297 3991 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 09:51:56.707554 master-0 kubenswrapper[3991]: W0318 09:51:56.706304 3991 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 09:51:56.707554 master-0 kubenswrapper[3991]: W0318 09:51:56.706312 3991 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 09:51:56.707554 master-0 kubenswrapper[3991]: W0318 09:51:56.706322 3991 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 09:51:56.707554 master-0 kubenswrapper[3991]: W0318 09:51:56.706331 3991 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 09:51:56.707554 master-0 kubenswrapper[3991]: I0318 09:51:56.706343 3991 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 09:51:56.708204 master-0 kubenswrapper[3991]: W0318 09:51:56.706550 3991 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 09:51:56.708204 master-0 kubenswrapper[3991]: W0318 09:51:56.706561 3991 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 09:51:56.708204 master-0 kubenswrapper[3991]: W0318 09:51:56.706572 3991 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 09:51:56.708204 master-0 kubenswrapper[3991]: W0318 09:51:56.706580 3991 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 09:51:56.708204 master-0 kubenswrapper[3991]: W0318 09:51:56.706588 3991 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 09:51:56.708204 master-0 kubenswrapper[3991]: W0318 09:51:56.706597 3991 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 09:51:56.708204 master-0 kubenswrapper[3991]: W0318 09:51:56.706605 3991 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 09:51:56.708204 master-0 kubenswrapper[3991]: W0318 09:51:56.706614 3991 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 09:51:56.708204 master-0 kubenswrapper[3991]: W0318 09:51:56.706622 3991 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 09:51:56.708204 master-0 kubenswrapper[3991]: W0318 09:51:56.706630 3991 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 09:51:56.708204 master-0 kubenswrapper[3991]: W0318 09:51:56.706638 3991 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 09:51:56.708204 master-0 kubenswrapper[3991]: W0318 09:51:56.706646 3991 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 09:51:56.708204 master-0 kubenswrapper[3991]: W0318 09:51:56.706655 3991 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 09:51:56.708204 master-0 kubenswrapper[3991]: W0318 09:51:56.706664 3991 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 09:51:56.708204 master-0 kubenswrapper[3991]: W0318 09:51:56.706672 3991 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 09:51:56.708204 master-0 kubenswrapper[3991]: W0318 09:51:56.706680 3991 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 09:51:56.708204 master-0 kubenswrapper[3991]: W0318 09:51:56.706690 3991 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 09:51:56.708204 master-0 kubenswrapper[3991]: W0318 09:51:56.706699 3991 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 09:51:56.708204 master-0 kubenswrapper[3991]: W0318 09:51:56.706708 3991 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 09:51:56.708761 master-0 kubenswrapper[3991]: W0318 09:51:56.706716 3991 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 09:51:56.708761 master-0 kubenswrapper[3991]: W0318 09:51:56.706726 3991 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 09:51:56.708761 master-0 kubenswrapper[3991]: W0318 09:51:56.706734 3991 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 09:51:56.708761 master-0 kubenswrapper[3991]: W0318 09:51:56.706744 3991 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 09:51:56.708761 master-0 kubenswrapper[3991]: W0318 09:51:56.706754 3991 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 09:51:56.708761 master-0 kubenswrapper[3991]: W0318 09:51:56.706763 3991 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 09:51:56.708761 master-0 kubenswrapper[3991]: W0318 09:51:56.706771 3991 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 09:51:56.708761 master-0 kubenswrapper[3991]: W0318 09:51:56.706779 3991 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 09:51:56.708761 master-0 kubenswrapper[3991]: W0318 09:51:56.706786 3991 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 09:51:56.708761 master-0 kubenswrapper[3991]: W0318 09:51:56.706794 3991 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 09:51:56.708761 master-0 kubenswrapper[3991]: W0318 09:51:56.706802 3991 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 09:51:56.708761 master-0 kubenswrapper[3991]: W0318 09:51:56.706810 3991 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 09:51:56.708761 master-0 kubenswrapper[3991]: W0318 09:51:56.706818 3991 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 09:51:56.708761 master-0 kubenswrapper[3991]: W0318 09:51:56.706861 3991 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 09:51:56.708761 master-0 kubenswrapper[3991]: W0318 09:51:56.706873 3991 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 09:51:56.708761 master-0 kubenswrapper[3991]: W0318 09:51:56.706884 3991 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 09:51:56.708761 master-0 kubenswrapper[3991]: W0318 09:51:56.706893 3991 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 09:51:56.708761 master-0 kubenswrapper[3991]: W0318 09:51:56.706901 3991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 09:51:56.708761 master-0 kubenswrapper[3991]: W0318 09:51:56.706909 3991 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 09:51:56.709331 master-0 kubenswrapper[3991]: W0318 09:51:56.706917 3991 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 09:51:56.709331 master-0 kubenswrapper[3991]: W0318 09:51:56.706925 3991 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 09:51:56.709331 master-0 kubenswrapper[3991]: W0318 09:51:56.706933 3991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 09:51:56.709331 master-0 kubenswrapper[3991]: W0318 09:51:56.706940 3991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 09:51:56.709331 master-0 kubenswrapper[3991]: W0318 09:51:56.706949 3991 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 09:51:56.709331 master-0 kubenswrapper[3991]: W0318 09:51:56.706956 3991 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 09:51:56.709331 master-0 kubenswrapper[3991]: W0318 09:51:56.706966 3991 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 09:51:56.709331 master-0 kubenswrapper[3991]: W0318 09:51:56.706974 3991 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 09:51:56.709331 master-0 kubenswrapper[3991]: W0318 09:51:56.706982 3991 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 09:51:56.709331 master-0 kubenswrapper[3991]: W0318 09:51:56.706990 3991 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 09:51:56.709331 master-0 kubenswrapper[3991]: W0318 09:51:56.706998 3991 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 09:51:56.709331 master-0 kubenswrapper[3991]: W0318 09:51:56.707006 3991 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 09:51:56.709331 master-0 kubenswrapper[3991]: W0318 09:51:56.707013 3991 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 09:51:56.709331 master-0 kubenswrapper[3991]: W0318 09:51:56.707021 3991 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 09:51:56.709331 master-0 kubenswrapper[3991]: W0318 09:51:56.707029 3991 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 09:51:56.709331 master-0 kubenswrapper[3991]: W0318 09:51:56.707036 3991 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 09:51:56.709331 master-0 kubenswrapper[3991]: W0318 09:51:56.707044 3991 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 09:51:56.709331 master-0 kubenswrapper[3991]: W0318 09:51:56.707052 3991 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 09:51:56.709331 master-0 kubenswrapper[3991]: W0318 09:51:56.707059 3991 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 09:51:56.709331 master-0 kubenswrapper[3991]: W0318 09:51:56.707067 3991 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 09:51:56.709331 master-0 kubenswrapper[3991]: W0318 09:51:56.707074 3991 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 09:51:56.710110 master-0 kubenswrapper[3991]: W0318 09:51:56.707082 3991 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 09:51:56.710110 master-0 kubenswrapper[3991]: W0318 09:51:56.707092 3991 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 09:51:56.710110 master-0 kubenswrapper[3991]: W0318 09:51:56.707102 3991 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 09:51:56.710110 master-0 kubenswrapper[3991]: W0318 09:51:56.707111 3991 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 09:51:56.710110 master-0 kubenswrapper[3991]: W0318 09:51:56.707119 3991 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 09:51:56.710110 master-0 kubenswrapper[3991]: W0318 09:51:56.707128 3991 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 09:51:56.710110 master-0 kubenswrapper[3991]: W0318 09:51:56.707138 3991 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 09:51:56.710110 master-0 kubenswrapper[3991]: W0318 09:51:56.707145 3991 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 09:51:56.710110 master-0 kubenswrapper[3991]: W0318 09:51:56.707153 3991 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 09:51:56.710110 master-0 kubenswrapper[3991]: W0318 09:51:56.707161 3991 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 09:51:56.710110 master-0 kubenswrapper[3991]: W0318 09:51:56.707170 3991 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 09:51:56.710110 master-0 kubenswrapper[3991]: W0318 09:51:56.707179 3991 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 09:51:56.710110 master-0 kubenswrapper[3991]: W0318 09:51:56.707186 3991 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 09:51:56.710110 master-0 kubenswrapper[3991]: I0318 09:51:56.707198 3991 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 09:51:56.710516 master-0 kubenswrapper[3991]: I0318 09:51:56.707471 3991 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 18 09:51:56.712106 master-0 kubenswrapper[3991]: I0318 09:51:56.712064 3991 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Mar 18 09:51:56.714298 master-0 kubenswrapper[3991]: I0318 09:51:56.714258 3991 server.go:997] "Starting client certificate rotation"
Mar 18 09:51:56.714359 master-0 kubenswrapper[3991]: I0318 09:51:56.714303 3991 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 18 09:51:56.714635 master-0 kubenswrapper[3991]: I0318 09:51:56.714588 3991 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 18 09:51:56.779753 master-0 kubenswrapper[3991]: I0318 09:51:56.779664 3991 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 18 09:51:56.785508 master-0 kubenswrapper[3991]: E0318 09:51:56.785450 3991 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 09:51:56.788221 master-0 kubenswrapper[3991]:
I0318 09:51:56.788181 3991 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 18 09:51:56.807608 master-0 kubenswrapper[3991]: I0318 09:51:56.807522 3991 log.go:25] "Validated CRI v1 runtime API" Mar 18 09:51:56.818023 master-0 kubenswrapper[3991]: I0318 09:51:56.817930 3991 log.go:25] "Validated CRI v1 image API" Mar 18 09:51:56.820880 master-0 kubenswrapper[3991]: I0318 09:51:56.820783 3991 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 18 09:51:56.825535 master-0 kubenswrapper[3991]: I0318 09:51:56.825455 3991 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 b6f69005-7b27-4e50-b235-73833be75bbb:/dev/vda3] Mar 18 09:51:56.825535 master-0 kubenswrapper[3991]: I0318 09:51:56.825503 3991 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}] Mar 18 09:51:56.856569 master-0 kubenswrapper[3991]: I0318 09:51:56.856138 3991 manager.go:217] Machine: {Timestamp:2026-03-18 09:51:56.853610573 +0000 UTC m=+0.812550498 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:2ce24ad926944999b07b278206f0e4a4 SystemUUID:2ce24ad9-2694-4999-b07b-278206f0e4a4 BootID:b58383dd-cfef-45af-ac7b-26a609b46986 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 
DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:50:e9:f6 Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:26:48:4a:2c:71:6e Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 
Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown 
InstanceType:Unknown InstanceID:None} Mar 18 09:51:56.856569 master-0 kubenswrapper[3991]: I0318 09:51:56.856496 3991 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Mar 18 09:51:56.856929 master-0 kubenswrapper[3991]: I0318 09:51:56.856815 3991 manager.go:233] Version: {KernelVersion:5.14.0-427.113.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202603021444-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 18 09:51:56.857647 master-0 kubenswrapper[3991]: I0318 09:51:56.857589 3991 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 18 09:51:56.858171 master-0 kubenswrapper[3991]: I0318 09:51:56.858087 3991 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 18 09:51:56.858557 master-0 kubenswrapper[3991]: I0318 09:51:56.858150 3991 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 18 09:51:56.858891 master-0 kubenswrapper[3991]: I0318 09:51:56.858589 3991 topology_manager.go:138] "Creating topology manager with none policy" Mar 18 09:51:56.858891 master-0 kubenswrapper[3991]: I0318 09:51:56.858617 3991 container_manager_linux.go:303] "Creating device plugin manager" Mar 18 09:51:56.858891 master-0 kubenswrapper[3991]: I0318 09:51:56.858799 3991 manager.go:142] 
"Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 18 09:51:56.858891 master-0 kubenswrapper[3991]: I0318 09:51:56.858888 3991 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 18 09:51:56.859345 master-0 kubenswrapper[3991]: I0318 09:51:56.859292 3991 state_mem.go:36] "Initialized new in-memory state store" Mar 18 09:51:56.859532 master-0 kubenswrapper[3991]: I0318 09:51:56.859485 3991 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 18 09:51:56.864687 master-0 kubenswrapper[3991]: I0318 09:51:56.864636 3991 kubelet.go:418] "Attempting to sync node with API server" Mar 18 09:51:56.864687 master-0 kubenswrapper[3991]: I0318 09:51:56.864677 3991 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 18 09:51:56.864906 master-0 kubenswrapper[3991]: I0318 09:51:56.864728 3991 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 18 09:51:56.864906 master-0 kubenswrapper[3991]: I0318 09:51:56.864755 3991 kubelet.go:324] "Adding apiserver pod source" Mar 18 09:51:56.864906 master-0 kubenswrapper[3991]: I0318 09:51:56.864779 3991 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 18 09:51:56.871384 master-0 kubenswrapper[3991]: I0318 09:51:56.871323 3991 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 18 09:51:56.873239 master-0 kubenswrapper[3991]: W0318 09:51:56.873140 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 09:51:56.873239 master-0 kubenswrapper[3991]: W0318 09:51:56.873172 3991 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 09:51:56.873432 master-0 kubenswrapper[3991]: E0318 09:51:56.873301 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 09:51:56.873432 master-0 kubenswrapper[3991]: E0318 09:51:56.873294 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 09:51:56.874594 master-0 kubenswrapper[3991]: I0318 09:51:56.874564 3991 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 18 09:51:56.875199 master-0 kubenswrapper[3991]: I0318 09:51:56.875163 3991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 18 09:51:56.875408 master-0 kubenswrapper[3991]: I0318 09:51:56.875382 3991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 18 09:51:56.875524 master-0 kubenswrapper[3991]: I0318 09:51:56.875505 3991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 18 09:51:56.875644 master-0 kubenswrapper[3991]: I0318 09:51:56.875625 3991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 18 09:51:56.875745 master-0 kubenswrapper[3991]: I0318 09:51:56.875726 3991 
plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 18 09:51:56.875877 master-0 kubenswrapper[3991]: I0318 09:51:56.875858 3991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 18 09:51:56.875981 master-0 kubenswrapper[3991]: I0318 09:51:56.875963 3991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 18 09:51:56.876129 master-0 kubenswrapper[3991]: I0318 09:51:56.876109 3991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 18 09:51:56.876236 master-0 kubenswrapper[3991]: I0318 09:51:56.876218 3991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 18 09:51:56.876345 master-0 kubenswrapper[3991]: I0318 09:51:56.876326 3991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 18 09:51:56.876477 master-0 kubenswrapper[3991]: I0318 09:51:56.876458 3991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 18 09:51:56.876589 master-0 kubenswrapper[3991]: I0318 09:51:56.876571 3991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 18 09:51:56.879022 master-0 kubenswrapper[3991]: I0318 09:51:56.878991 3991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 18 09:51:56.879923 master-0 kubenswrapper[3991]: I0318 09:51:56.879897 3991 server.go:1280] "Started kubelet" Mar 18 09:51:56.880580 master-0 kubenswrapper[3991]: I0318 09:51:56.880233 3991 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 18 09:51:56.880580 master-0 kubenswrapper[3991]: I0318 09:51:56.880221 3991 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 18 09:51:56.880580 master-0 kubenswrapper[3991]: I0318 09:51:56.880398 3991 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 18 09:51:56.881274 master-0 kubenswrapper[3991]: I0318 09:51:56.881192 3991 csi_plugin.go:884] Failed to 
contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 09:51:56.881362 master-0 kubenswrapper[3991]: I0318 09:51:56.881226 3991 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 18 09:51:56.882192 master-0 systemd[1]: Started Kubernetes Kubelet. Mar 18 09:51:56.887753 master-0 kubenswrapper[3991]: I0318 09:51:56.887676 3991 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 18 09:51:56.887753 master-0 kubenswrapper[3991]: I0318 09:51:56.887761 3991 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 18 09:51:56.888962 master-0 kubenswrapper[3991]: E0318 09:51:56.888881 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 09:51:56.889379 master-0 kubenswrapper[3991]: I0318 09:51:56.889331 3991 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 18 09:51:56.890652 master-0 kubenswrapper[3991]: I0318 09:51:56.889708 3991 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 18 09:51:56.891157 master-0 kubenswrapper[3991]: I0318 09:51:56.891110 3991 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 18 09:51:56.891480 master-0 kubenswrapper[3991]: I0318 09:51:56.891440 3991 reconstruct.go:97] "Volume reconstruction finished" Mar 18 09:51:56.891480 master-0 kubenswrapper[3991]: I0318 09:51:56.891456 3991 reconciler.go:26] "Reconciler: start to sync state" Mar 18 09:51:56.899410 master-0 kubenswrapper[3991]: I0318 09:51:56.899371 3991 factory.go:55] Registering systemd factory Mar 18 09:51:56.899410 master-0 kubenswrapper[3991]: I0318 09:51:56.899402 3991 factory.go:221] Registration of the systemd container factory successfully Mar 18 09:51:56.899897 
master-0 kubenswrapper[3991]: I0318 09:51:56.899816 3991 factory.go:153] Registering CRI-O factory Mar 18 09:51:56.899897 master-0 kubenswrapper[3991]: I0318 09:51:56.899897 3991 factory.go:221] Registration of the crio container factory successfully Mar 18 09:51:56.900118 master-0 kubenswrapper[3991]: I0318 09:51:56.900062 3991 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 18 09:51:56.900118 master-0 kubenswrapper[3991]: I0318 09:51:56.900106 3991 factory.go:103] Registering Raw factory Mar 18 09:51:56.900275 master-0 kubenswrapper[3991]: I0318 09:51:56.900131 3991 manager.go:1196] Started watching for new ooms in manager Mar 18 09:51:56.900275 master-0 kubenswrapper[3991]: I0318 09:51:56.900137 3991 server.go:449] "Adding debug handlers to kubelet server" Mar 18 09:51:56.901026 master-0 kubenswrapper[3991]: W0318 09:51:56.900449 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 09:51:56.901026 master-0 kubenswrapper[3991]: E0318 09:51:56.900642 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 09:51:56.901026 master-0 kubenswrapper[3991]: I0318 09:51:56.900959 3991 manager.go:319] Starting recovery of all containers Mar 18 09:51:56.901026 master-0 kubenswrapper[3991]: E0318 09:51:56.901015 3991 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Mar 18 09:51:56.902411 master-0 kubenswrapper[3991]: E0318 09:51:56.902346 3991 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Mar 18 09:51:56.904464 master-0 kubenswrapper[3991]: E0318 09:51:56.903041 3991 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189de6ba7c6f39d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:51:56.879813072 +0000 UTC m=+0.838753007,LastTimestamp:2026-03-18 09:51:56.879813072 +0000 UTC m=+0.838753007,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:51:56.921607 master-0 kubenswrapper[3991]: I0318 09:51:56.921440 3991 manager.go:324] Recovery completed Mar 18 09:51:56.935680 master-0 kubenswrapper[3991]: I0318 09:51:56.935281 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 09:51:56.941805 master-0 kubenswrapper[3991]: I0318 09:51:56.941711 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 09:51:56.941805 master-0 kubenswrapper[3991]: I0318 09:51:56.941787 3991 kubelet_node_status.go:724] "Recording event message for node" 
node="master-0" event="NodeHasNoDiskPressure" Mar 18 09:51:56.941805 master-0 kubenswrapper[3991]: I0318 09:51:56.941798 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:51:56.942646 master-0 kubenswrapper[3991]: I0318 09:51:56.942606 3991 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 18 09:51:56.942646 master-0 kubenswrapper[3991]: I0318 09:51:56.942626 3991 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 18 09:51:56.942763 master-0 kubenswrapper[3991]: I0318 09:51:56.942655 3991 state_mem.go:36] "Initialized new in-memory state store" Mar 18 09:51:56.990083 master-0 kubenswrapper[3991]: E0318 09:51:56.989968 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 09:51:57.061706 master-0 kubenswrapper[3991]: I0318 09:51:57.061656 3991 policy_none.go:49] "None policy: Start" Mar 18 09:51:57.062969 master-0 kubenswrapper[3991]: I0318 09:51:57.062931 3991 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 18 09:51:57.063020 master-0 kubenswrapper[3991]: I0318 09:51:57.062979 3991 state_mem.go:35] "Initializing new in-memory state store" Mar 18 09:51:57.090951 master-0 kubenswrapper[3991]: E0318 09:51:57.090887 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 09:51:57.103605 master-0 kubenswrapper[3991]: E0318 09:51:57.103474 3991 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Mar 18 09:51:57.155766 master-0 kubenswrapper[3991]: I0318 09:51:57.138088 3991 manager.go:334] "Starting Device Plugin manager" Mar 18 09:51:57.155766 master-0 kubenswrapper[3991]: I0318 09:51:57.138204 3991 manager.go:513] 
"Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 18 09:51:57.155766 master-0 kubenswrapper[3991]: I0318 09:51:57.138232 3991 server.go:79] "Starting device plugin registration server" Mar 18 09:51:57.155766 master-0 kubenswrapper[3991]: I0318 09:51:57.139017 3991 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 18 09:51:57.155766 master-0 kubenswrapper[3991]: I0318 09:51:57.139073 3991 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 18 09:51:57.155766 master-0 kubenswrapper[3991]: I0318 09:51:57.139409 3991 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 18 09:51:57.155766 master-0 kubenswrapper[3991]: I0318 09:51:57.139571 3991 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 18 09:51:57.155766 master-0 kubenswrapper[3991]: I0318 09:51:57.139592 3991 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 18 09:51:57.155766 master-0 kubenswrapper[3991]: E0318 09:51:57.142195 3991 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 18 09:51:57.155766 master-0 kubenswrapper[3991]: I0318 09:51:57.145801 3991 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 18 09:51:57.155766 master-0 kubenswrapper[3991]: I0318 09:51:57.148367 3991 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6"
Mar 18 09:51:57.155766 master-0 kubenswrapper[3991]: I0318 09:51:57.148443 3991 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 18 09:51:57.155766 master-0 kubenswrapper[3991]: I0318 09:51:57.148602 3991 kubelet.go:2335] "Starting kubelet main sync loop"
Mar 18 09:51:57.155766 master-0 kubenswrapper[3991]: E0318 09:51:57.148693 3991 kubelet.go:2359] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Mar 18 09:51:57.155766 master-0 kubenswrapper[3991]: W0318 09:51:57.150344 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 09:51:57.155766 master-0 kubenswrapper[3991]: E0318 09:51:57.150443 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 09:51:57.240585 master-0 kubenswrapper[3991]: I0318 09:51:57.240337 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:51:57.241942 master-0 kubenswrapper[3991]: I0318 09:51:57.241888 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:51:57.242088 master-0 kubenswrapper[3991]: I0318 09:51:57.242032 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:51:57.242088 master-0 kubenswrapper[3991]: I0318 09:51:57.242072 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:51:57.242156 master-0 kubenswrapper[3991]: I0318 09:51:57.242119 3991 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 18 09:51:57.243452 master-0 kubenswrapper[3991]: E0318 09:51:57.243394 3991 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 18 09:51:57.249576 master-0 kubenswrapper[3991]: I0318 09:51:57.249526 3991 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Mar 18 09:51:57.249658 master-0 kubenswrapper[3991]: I0318 09:51:57.249596 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:51:57.250848 master-0 kubenswrapper[3991]: I0318 09:51:57.250790 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:51:57.250922 master-0 kubenswrapper[3991]: I0318 09:51:57.250860 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:51:57.250922 master-0 kubenswrapper[3991]: I0318 09:51:57.250878 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:51:57.251085 master-0 kubenswrapper[3991]: I0318 09:51:57.251042 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:51:57.251698 master-0 kubenswrapper[3991]: I0318 09:51:57.251550 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:51:57.251698 master-0 kubenswrapper[3991]: I0318 09:51:57.251653 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:51:57.251994 master-0 kubenswrapper[3991]: I0318 09:51:57.251872 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:51:57.251994 master-0 kubenswrapper[3991]: I0318 09:51:57.251929 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:51:57.251994 master-0 kubenswrapper[3991]: I0318 09:51:57.251958 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:51:57.252192 master-0 kubenswrapper[3991]: I0318 09:51:57.252137 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:51:57.252311 master-0 kubenswrapper[3991]: I0318 09:51:57.252277 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 09:51:57.252347 master-0 kubenswrapper[3991]: I0318 09:51:57.252334 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:51:57.252722 master-0 kubenswrapper[3991]: I0318 09:51:57.252689 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:51:57.252777 master-0 kubenswrapper[3991]: I0318 09:51:57.252731 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:51:57.252777 master-0 kubenswrapper[3991]: I0318 09:51:57.252748 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:51:57.253159 master-0 kubenswrapper[3991]: I0318 09:51:57.253134 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:51:57.253159 master-0 kubenswrapper[3991]: I0318 09:51:57.253152 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:51:57.253238 master-0 kubenswrapper[3991]: I0318 09:51:57.253181 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:51:57.253238 master-0 kubenswrapper[3991]: I0318 09:51:57.253192 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:51:57.253238 master-0 kubenswrapper[3991]: I0318 09:51:57.253158 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:51:57.253320 master-0 kubenswrapper[3991]: I0318 09:51:57.253266 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:51:57.253408 master-0 kubenswrapper[3991]: I0318 09:51:57.253389 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:51:57.253510 master-0 kubenswrapper[3991]: I0318 09:51:57.253493 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 09:51:57.253540 master-0 kubenswrapper[3991]: I0318 09:51:57.253525 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:51:57.255561 master-0 kubenswrapper[3991]: I0318 09:51:57.255487 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:51:57.255561 master-0 kubenswrapper[3991]: I0318 09:51:57.255506 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:51:57.255561 master-0 kubenswrapper[3991]: I0318 09:51:57.255519 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:51:57.255561 master-0 kubenswrapper[3991]: I0318 09:51:57.255535 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:51:57.255986 master-0 kubenswrapper[3991]: I0318 09:51:57.255524 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:51:57.255986 master-0 kubenswrapper[3991]: I0318 09:51:57.255665 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:51:57.255986 master-0 kubenswrapper[3991]: I0318 09:51:57.255922 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:51:57.256110 master-0 kubenswrapper[3991]: I0318 09:51:57.255993 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 09:51:57.256110 master-0 kubenswrapper[3991]: I0318 09:51:57.256018 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:51:57.256907 master-0 kubenswrapper[3991]: I0318 09:51:57.256798 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:51:57.256907 master-0 kubenswrapper[3991]: I0318 09:51:57.256869 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:51:57.256907 master-0 kubenswrapper[3991]: I0318 09:51:57.256889 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:51:57.257185 master-0 kubenswrapper[3991]: I0318 09:51:57.257030 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:51:57.257185 master-0 kubenswrapper[3991]: I0318 09:51:57.257071 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:51:57.257185 master-0 kubenswrapper[3991]: I0318 09:51:57.257091 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:51:57.257368 master-0 kubenswrapper[3991]: I0318 09:51:57.257335 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:51:57.257428 master-0 kubenswrapper[3991]: I0318 09:51:57.257385 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:51:57.258200 master-0 kubenswrapper[3991]: I0318 09:51:57.258153 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:51:57.258200 master-0 kubenswrapper[3991]: I0318 09:51:57.258195 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:51:57.258354 master-0 kubenswrapper[3991]: I0318 09:51:57.258215 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:51:57.293722 master-0 kubenswrapper[3991]: I0318 09:51:57.293567 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 09:51:57.293722 master-0 kubenswrapper[3991]: I0318 09:51:57.293635 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:51:57.293722 master-0 kubenswrapper[3991]: I0318 09:51:57.293728 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 09:51:57.294143 master-0 kubenswrapper[3991]: I0318 09:51:57.293762 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:51:57.294143 master-0 kubenswrapper[3991]: I0318 09:51:57.293800 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:51:57.294143 master-0 kubenswrapper[3991]: I0318 09:51:57.293870 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:51:57.294143 master-0 kubenswrapper[3991]: I0318 09:51:57.293930 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 09:51:57.294143 master-0 kubenswrapper[3991]: I0318 09:51:57.293981 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 09:51:57.294143 master-0 kubenswrapper[3991]: I0318 09:51:57.294069 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:51:57.294143 master-0 kubenswrapper[3991]: I0318 09:51:57.294142 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 09:51:57.294791 master-0 kubenswrapper[3991]: I0318 09:51:57.294209 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:51:57.294791 master-0 kubenswrapper[3991]: I0318 09:51:57.294260 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 09:51:57.294791 master-0 kubenswrapper[3991]: I0318 09:51:57.294305 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:51:57.294791 master-0 kubenswrapper[3991]: I0318 09:51:57.294354 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:51:57.294791 master-0 kubenswrapper[3991]: I0318 09:51:57.294422 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:51:57.294791 master-0 kubenswrapper[3991]: I0318 09:51:57.294469 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:51:57.294791 master-0 kubenswrapper[3991]: I0318 09:51:57.294517 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:51:57.395365 master-0 kubenswrapper[3991]: I0318 09:51:57.395272 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:51:57.395365 master-0 kubenswrapper[3991]: I0318 09:51:57.395356 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:51:57.395365 master-0 kubenswrapper[3991]: I0318 09:51:57.395382 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:51:57.395365 master-0 kubenswrapper[3991]: I0318 09:51:57.395400 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 09:51:57.395952 master-0 kubenswrapper[3991]: I0318 09:51:57.395418 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 09:51:57.395952 master-0 kubenswrapper[3991]: I0318 09:51:57.395414 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:51:57.395952 master-0 kubenswrapper[3991]: I0318 09:51:57.395437 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:51:57.395952 master-0 kubenswrapper[3991]: I0318 09:51:57.395454 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 09:51:57.395952 master-0 kubenswrapper[3991]: I0318 09:51:57.395470 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:51:57.395952 master-0 kubenswrapper[3991]: I0318 09:51:57.395484 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:51:57.395952 master-0 kubenswrapper[3991]: I0318 09:51:57.395498 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:51:57.395952 master-0 kubenswrapper[3991]: I0318 09:51:57.395520 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:51:57.395952 master-0 kubenswrapper[3991]: I0318 09:51:57.395494 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:51:57.395952 master-0 kubenswrapper[3991]: I0318 09:51:57.395572 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:51:57.395952 master-0 kubenswrapper[3991]: I0318 09:51:57.395518 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:51:57.395952 master-0 kubenswrapper[3991]: I0318 09:51:57.395558 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 09:51:57.395952 master-0 kubenswrapper[3991]: I0318 09:51:57.395498 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 09:51:57.395952 master-0 kubenswrapper[3991]: I0318 09:51:57.395596 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 09:51:57.395952 master-0 kubenswrapper[3991]: I0318 09:51:57.395647 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:51:57.395952 master-0 kubenswrapper[3991]: I0318 09:51:57.395597 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:51:57.395952 master-0 kubenswrapper[3991]: I0318 09:51:57.395763 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:51:57.396630 master-0 kubenswrapper[3991]: I0318 09:51:57.395631 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:51:57.396630 master-0 kubenswrapper[3991]: I0318 09:51:57.395650 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:51:57.396630 master-0 kubenswrapper[3991]: I0318 09:51:57.395615 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:51:57.396630 master-0 kubenswrapper[3991]: I0318 09:51:57.395643 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:51:57.396630 master-0 kubenswrapper[3991]: I0318 09:51:57.395943 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 09:51:57.396630 master-0 kubenswrapper[3991]: I0318 09:51:57.395950 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:51:57.396630 master-0 kubenswrapper[3991]: I0318 09:51:57.396000 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 09:51:57.396630 master-0 kubenswrapper[3991]: I0318 09:51:57.396071 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 09:51:57.396630 master-0 kubenswrapper[3991]: I0318 09:51:57.396142 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:51:57.396630 master-0 kubenswrapper[3991]: I0318 09:51:57.396196 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 09:51:57.396630 master-0 kubenswrapper[3991]: I0318 09:51:57.396209 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 09:51:57.396630 master-0 kubenswrapper[3991]: I0318 09:51:57.396253 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 09:51:57.396630 master-0 kubenswrapper[3991]: I0318 09:51:57.396263 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:51:57.444931 master-0 kubenswrapper[3991]: I0318 09:51:57.444508 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:51:57.446333 master-0 kubenswrapper[3991]: I0318 09:51:57.446272 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:51:57.446447 master-0 kubenswrapper[3991]: I0318 09:51:57.446347 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:51:57.446447 master-0 kubenswrapper[3991]: I0318 09:51:57.446370 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:51:57.446569 master-0 kubenswrapper[3991]: I0318 09:51:57.446496 3991 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 18 09:51:57.448011 master-0 kubenswrapper[3991]: E0318 09:51:57.447951 3991 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 18 09:51:57.505383 master-0 kubenswrapper[3991]: E0318 09:51:57.505172 3991 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms"
Mar 18 09:51:57.581623 master-0 kubenswrapper[3991]: I0318 09:51:57.581515 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:51:57.599931 master-0 kubenswrapper[3991]: I0318 09:51:57.599813 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 09:51:57.608698 master-0 kubenswrapper[3991]: I0318 09:51:57.608319 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 09:51:57.630470 master-0 kubenswrapper[3991]: I0318 09:51:57.630372 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 09:51:57.637989 master-0 kubenswrapper[3991]: I0318 09:51:57.637640 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:51:57.757305 master-0 kubenswrapper[3991]: W0318 09:51:57.757137 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 09:51:57.757305 master-0 kubenswrapper[3991]: E0318 09:51:57.757207 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 09:51:57.848763 master-0 kubenswrapper[3991]: I0318 09:51:57.848621 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:51:57.850028 master-0 kubenswrapper[3991]: I0318 09:51:57.849987 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:51:57.850091 master-0 kubenswrapper[3991]: I0318 09:51:57.850041 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:51:57.850091 master-0 kubenswrapper[3991]: I0318 09:51:57.850059 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:51:57.850155 master-0 kubenswrapper[3991]: I0318 09:51:57.850114 3991 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 18 09:51:57.851074 master-0 kubenswrapper[3991]: E0318 09:51:57.851019 3991 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 18 09:51:57.856950 master-0 kubenswrapper[3991]: W0318 09:51:57.856849 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 09:51:57.857095 master-0 kubenswrapper[3991]: E0318 09:51:57.856969 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 09:51:57.883076 master-0 kubenswrapper[3991]: I0318 09:51:57.882992 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 09:51:58.240547 master-0 kubenswrapper[3991]: W0318 09:51:58.240421 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 09:51:58.240547 master-0 kubenswrapper[3991]: E0318 09:51:58.240541 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 09:51:58.307462 master-0 kubenswrapper[3991]: E0318 09:51:58.307366 3991 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s"
Mar 18 09:51:58.343713 master-0 kubenswrapper[3991]: W0318 09:51:58.343611 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 09:51:58.344073 master-0 kubenswrapper[3991]: E0318 09:51:58.343718 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 09:51:58.652203 master-0 kubenswrapper[3991]: I0318 09:51:58.652038 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:51:58.653583 master-0 kubenswrapper[3991]: I0318 09:51:58.653535 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:51:58.653717 master-0 kubenswrapper[3991]: I0318 09:51:58.653589 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0"
event="NodeHasNoDiskPressure" Mar 18 09:51:58.653717 master-0 kubenswrapper[3991]: I0318 09:51:58.653608 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:51:58.653717 master-0 kubenswrapper[3991]: I0318 09:51:58.653672 3991 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 09:51:58.654890 master-0 kubenswrapper[3991]: E0318 09:51:58.654799 3991 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 18 09:51:58.864280 master-0 kubenswrapper[3991]: I0318 09:51:58.864177 3991 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 18 09:51:58.865974 master-0 kubenswrapper[3991]: E0318 09:51:58.865920 3991 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 09:51:58.882542 master-0 kubenswrapper[3991]: I0318 09:51:58.882480 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 09:51:59.480315 master-0 kubenswrapper[3991]: W0318 09:51:59.480208 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc83737980b9ee109184b1d78e942cf36.slice/crio-5ad9370766ae18aa384f3b2f07e9d3cada2bbe156f6bcba4f02016b49f4e713f WatchSource:0}: Error finding container 
5ad9370766ae18aa384f3b2f07e9d3cada2bbe156f6bcba4f02016b49f4e713f: Status 404 returned error can't find the container with id 5ad9370766ae18aa384f3b2f07e9d3cada2bbe156f6bcba4f02016b49f4e713f Mar 18 09:51:59.527568 master-0 kubenswrapper[3991]: I0318 09:51:59.527515 3991 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 09:51:59.539300 master-0 kubenswrapper[3991]: W0318 09:51:59.539216 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49fac1b46a11e49501805e891baae4a9.slice/crio-401219e24c2bd7d9e48328027e1c78136e8f25304b76126b40b8362b04997723 WatchSource:0}: Error finding container 401219e24c2bd7d9e48328027e1c78136e8f25304b76126b40b8362b04997723: Status 404 returned error can't find the container with id 401219e24c2bd7d9e48328027e1c78136e8f25304b76126b40b8362b04997723 Mar 18 09:51:59.737976 master-0 kubenswrapper[3991]: W0318 09:51:59.737898 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1249822f86f23526277d165c0d5d3c19.slice/crio-5f76a44720ca3021debcea94492e12e9442fdb9f8fbe338ee1965217a14109bd WatchSource:0}: Error finding container 5f76a44720ca3021debcea94492e12e9442fdb9f8fbe338ee1965217a14109bd: Status 404 returned error can't find the container with id 5f76a44720ca3021debcea94492e12e9442fdb9f8fbe338ee1965217a14109bd Mar 18 09:51:59.828879 master-0 kubenswrapper[3991]: W0318 09:51:59.828767 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46f265536aba6292ead501bc9b49f327.slice/crio-cd2ad4fe81a1a347f10f858030eebc98abfffaf65eba926cffe2c8990ddb0614 WatchSource:0}: Error finding container cd2ad4fe81a1a347f10f858030eebc98abfffaf65eba926cffe2c8990ddb0614: Status 404 returned error can't find the container with id 
cd2ad4fe81a1a347f10f858030eebc98abfffaf65eba926cffe2c8990ddb0614 Mar 18 09:51:59.876398 master-0 kubenswrapper[3991]: E0318 09:51:59.876237 3991 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189de6ba7c6f39d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:51:56.879813072 +0000 UTC m=+0.838753007,LastTimestamp:2026-03-18 09:51:56.879813072 +0000 UTC m=+0.838753007,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:51:59.882258 master-0 kubenswrapper[3991]: I0318 09:51:59.882184 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 09:51:59.908995 master-0 kubenswrapper[3991]: E0318 09:51:59.908916 3991 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Mar 18 09:52:00.044443 master-0 kubenswrapper[3991]: W0318 09:52:00.044359 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd664a6d0d2a24360dee10612610f1b59.slice/crio-cf5a74b27454f5e8b1c18f8ef6d030c5b30a033cbc5baf882408ad3e065176ae WatchSource:0}: Error finding 
container cf5a74b27454f5e8b1c18f8ef6d030c5b30a033cbc5baf882408ad3e065176ae: Status 404 returned error can't find the container with id cf5a74b27454f5e8b1c18f8ef6d030c5b30a033cbc5baf882408ad3e065176ae Mar 18 09:52:00.162361 master-0 kubenswrapper[3991]: I0318 09:52:00.162186 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"cf5a74b27454f5e8b1c18f8ef6d030c5b30a033cbc5baf882408ad3e065176ae"} Mar 18 09:52:00.163802 master-0 kubenswrapper[3991]: I0318 09:52:00.163713 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"cd2ad4fe81a1a347f10f858030eebc98abfffaf65eba926cffe2c8990ddb0614"} Mar 18 09:52:00.164921 master-0 kubenswrapper[3991]: I0318 09:52:00.164876 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"5f76a44720ca3021debcea94492e12e9442fdb9f8fbe338ee1965217a14109bd"} Mar 18 09:52:00.166420 master-0 kubenswrapper[3991]: I0318 09:52:00.166077 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"401219e24c2bd7d9e48328027e1c78136e8f25304b76126b40b8362b04997723"} Mar 18 09:52:00.167119 master-0 kubenswrapper[3991]: I0318 09:52:00.167049 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"5ad9370766ae18aa384f3b2f07e9d3cada2bbe156f6bcba4f02016b49f4e713f"} Mar 18 09:52:00.255297 master-0 kubenswrapper[3991]: I0318 09:52:00.255215 3991 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Mar 18 09:52:00.256775 master-0 kubenswrapper[3991]: I0318 09:52:00.256748 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 09:52:00.256877 master-0 kubenswrapper[3991]: I0318 09:52:00.256792 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 09:52:00.256877 master-0 kubenswrapper[3991]: I0318 09:52:00.256805 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:52:00.256951 master-0 kubenswrapper[3991]: I0318 09:52:00.256880 3991 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 09:52:00.257779 master-0 kubenswrapper[3991]: E0318 09:52:00.257726 3991 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 18 09:52:00.682891 master-0 kubenswrapper[3991]: W0318 09:52:00.682811 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 09:52:00.682990 master-0 kubenswrapper[3991]: E0318 09:52:00.682890 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 09:52:00.790218 master-0 kubenswrapper[3991]: W0318 09:52:00.790133 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: 
Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 09:52:00.790394 master-0 kubenswrapper[3991]: E0318 09:52:00.790226 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 09:52:00.827722 master-0 kubenswrapper[3991]: W0318 09:52:00.827647 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 09:52:00.827870 master-0 kubenswrapper[3991]: E0318 09:52:00.827735 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 09:52:00.882055 master-0 kubenswrapper[3991]: I0318 09:52:00.882006 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 09:52:01.453946 master-0 kubenswrapper[3991]: W0318 09:52:01.453889 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 
192.168.32.10:6443: connect: connection refused Mar 18 09:52:01.453946 master-0 kubenswrapper[3991]: E0318 09:52:01.453944 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 09:52:01.882805 master-0 kubenswrapper[3991]: I0318 09:52:01.882749 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 09:52:02.882357 master-0 kubenswrapper[3991]: I0318 09:52:02.882290 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 09:52:03.110648 master-0 kubenswrapper[3991]: E0318 09:52:03.110584 3991 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Mar 18 09:52:03.201874 master-0 kubenswrapper[3991]: I0318 09:52:03.201735 3991 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 18 09:52:03.202913 master-0 kubenswrapper[3991]: E0318 09:52:03.202880 3991 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 09:52:03.458213 master-0 kubenswrapper[3991]: I0318 09:52:03.458092 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 09:52:03.459343 master-0 kubenswrapper[3991]: I0318 09:52:03.459304 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 09:52:03.459410 master-0 kubenswrapper[3991]: I0318 09:52:03.459354 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 09:52:03.459410 master-0 kubenswrapper[3991]: I0318 09:52:03.459366 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:52:03.459410 master-0 kubenswrapper[3991]: I0318 09:52:03.459406 3991 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 09:52:03.460232 master-0 kubenswrapper[3991]: E0318 09:52:03.460186 3991 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 18 09:52:03.883232 master-0 kubenswrapper[3991]: I0318 09:52:03.883162 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 09:52:04.629040 master-0 kubenswrapper[3991]: W0318 09:52:04.628655 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
192.168.32.10:6443: connect: connection refused Mar 18 09:52:04.629040 master-0 kubenswrapper[3991]: E0318 09:52:04.628707 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 09:52:04.882903 master-0 kubenswrapper[3991]: I0318 09:52:04.882755 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 09:52:04.957985 master-0 kubenswrapper[3991]: W0318 09:52:04.957909 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 09:52:04.957985 master-0 kubenswrapper[3991]: E0318 09:52:04.957965 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 09:52:05.883089 master-0 kubenswrapper[3991]: I0318 09:52:05.883029 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 09:52:06.079377 master-0 kubenswrapper[3991]: W0318 09:52:06.079288 3991 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 09:52:06.079377 master-0 kubenswrapper[3991]: E0318 09:52:06.079362 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 09:52:06.471076 master-0 kubenswrapper[3991]: W0318 09:52:06.470984 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 09:52:06.471076 master-0 kubenswrapper[3991]: E0318 09:52:06.471070 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 09:52:06.882783 master-0 kubenswrapper[3991]: I0318 09:52:06.882679 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 09:52:07.142586 master-0 kubenswrapper[3991]: E0318 09:52:07.142388 3991 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" 
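The "Failed to ensure lease exists, will retry" entries above report retry intervals of 800ms, 1.6s, 3.2s, and 6.4s (a later entry reports 7s): while the API server stays unreachable, the kubelet's node-lease controller doubles its retry delay up to a ceiling. A minimal sketch of that doubling-with-cap pattern, with the base interval and cap read off the log rather than taken from kubelet source:

```python
def lease_retry_intervals(base=0.8, cap=7.0, attempts=5):
    """Doubling backoff with a ceiling, matching the interval= values
    in the 'will retry' lines above: 0.8s, 1.6s, 3.2s, 6.4s, then 7s.
    base/cap are assumptions inferred from this log excerpt."""
    intervals, delay = [], base
    for _ in range(attempts):
        intervals.append(delay)
        delay = min(delay * 2, cap)  # double until the cap is reached
    return intervals
```

With the defaults this yields the same sequence the log shows; any further retries would stay pinned at the 7s cap.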
Mar 18 09:52:07.882852 master-0 kubenswrapper[3991]: I0318 09:52:07.882752 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 09:52:08.882181 master-0 kubenswrapper[3991]: I0318 09:52:08.882131 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 09:52:09.516228 master-0 kubenswrapper[3991]: E0318 09:52:09.516157 3991 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="7s"
Mar 18 09:52:09.861407 master-0 kubenswrapper[3991]: I0318 09:52:09.861339 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:52:09.862646 master-0 kubenswrapper[3991]: I0318 09:52:09.862600 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:52:09.862701 master-0 kubenswrapper[3991]: I0318 09:52:09.862666 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:52:09.862701 master-0 kubenswrapper[3991]: I0318 09:52:09.862690 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:52:09.862795 master-0 kubenswrapper[3991]: I0318 09:52:09.862766 3991 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 18 09:52:09.863780 master-0 kubenswrapper[3991]: E0318 09:52:09.863720 3991 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 18 09:52:09.878305 master-0 kubenswrapper[3991]: E0318 09:52:09.878091 3991 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189de6ba7c6f39d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:51:56.879813072 +0000 UTC m=+0.838753007,LastTimestamp:2026-03-18 09:51:56.879813072 +0000 UTC m=+0.838753007,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 09:52:09.882100 master-0 kubenswrapper[3991]: I0318 09:52:09.882057 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 09:52:10.882740 master-0 kubenswrapper[3991]: I0318 09:52:10.882684 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 09:52:11.189103 master-0 kubenswrapper[3991]: I0318 09:52:11.189039 3991 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="a88536111853576d542216418fa9e6a7c0a796244d77dbfb3568461d1ad235ad" exitCode=0
Mar 18 09:52:11.189201 master-0 kubenswrapper[3991]: I0318 09:52:11.189127 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:52:11.189201 master-0 kubenswrapper[3991]: I0318 09:52:11.189131 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"a88536111853576d542216418fa9e6a7c0a796244d77dbfb3568461d1ad235ad"}
Mar 18 09:52:11.189859 master-0 kubenswrapper[3991]: I0318 09:52:11.189800 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:52:11.189948 master-0 kubenswrapper[3991]: I0318 09:52:11.189871 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:52:11.189948 master-0 kubenswrapper[3991]: I0318 09:52:11.189886 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:52:11.191141 master-0 kubenswrapper[3991]: I0318 09:52:11.191028 3991 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="0e1b90509e26fef960c00500d9ad97c317d8639e8d0264437904c7c3c438399a" exitCode=0
Mar 18 09:52:11.191141 master-0 kubenswrapper[3991]: I0318 09:52:11.191107 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:52:11.191263 master-0 kubenswrapper[3991]: I0318 09:52:11.191105 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerDied","Data":"0e1b90509e26fef960c00500d9ad97c317d8639e8d0264437904c7c3c438399a"}
Mar 18 09:52:11.192310 master-0 kubenswrapper[3991]: I0318 09:52:11.191736 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:52:11.192310 master-0 kubenswrapper[3991]: I0318 09:52:11.191775 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:52:11.192310 master-0 kubenswrapper[3991]: I0318 09:52:11.191787 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:52:11.192456 master-0 kubenswrapper[3991]: I0318 09:52:11.192325 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"f2607ae8dbefbd514d748ad1e03e092ca2114ce2ce09f7065e579402bdb6a4c5"}
Mar 18 09:52:11.450251 master-0 kubenswrapper[3991]: W0318 09:52:11.450122 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 09:52:11.450251 master-0 kubenswrapper[3991]: E0318 09:52:11.450243 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 09:52:11.493673 master-0 kubenswrapper[3991]: I0318 09:52:11.493584 3991 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 18 09:52:11.495371 master-0 kubenswrapper[3991]: E0318 09:52:11.495312 3991 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 09:52:11.569571 master-0 kubenswrapper[3991]: I0318 09:52:11.569078 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:52:11.570713 master-0 kubenswrapper[3991]: I0318 09:52:11.570650 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:52:11.570795 master-0 kubenswrapper[3991]: I0318 09:52:11.570721 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:52:11.570795 master-0 kubenswrapper[3991]: I0318 09:52:11.570742 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:52:11.883199 master-0 kubenswrapper[3991]: I0318 09:52:11.883033 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 09:52:12.266219 master-0 kubenswrapper[3991]: I0318 09:52:12.266168 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:52:12.266376 master-0 kubenswrapper[3991]: I0318 09:52:12.266159 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"5230f2c731392582b4c5b7f1d1739dca596269f4bff091decf0daf9fa0a42c23"}
Mar 18 09:52:12.267464 master-0 kubenswrapper[3991]: I0318 09:52:12.267423 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:52:12.267522 master-0 kubenswrapper[3991]: I0318 09:52:12.267468 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:52:12.267522 master-0 kubenswrapper[3991]: I0318 09:52:12.267485 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:52:12.268514 master-0 kubenswrapper[3991]: I0318 09:52:12.268469 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"f305e60f069ccf418ecdd9a5248eadee88e9d2be3ca57cfd6181d1ab96140c57"}
Mar 18 09:52:13.272626 master-0 kubenswrapper[3991]: I0318 09:52:13.272565 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"0f4ef82cd98a641ac2372a9202df576de9d16287dc2775cc6c0529b93f52b3e6"}
Mar 18 09:52:13.274293 master-0 kubenswrapper[3991]: I0318 09:52:13.274260 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/0.log"
Mar 18 09:52:13.275277 master-0 kubenswrapper[3991]: I0318 09:52:13.275233 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"298568714a91c392789c4d35479df7f1885608033896abf7a1e846e24cce84f8"}
Mar 18 09:52:13.275327 master-0 kubenswrapper[3991]: I0318 09:52:13.275306 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:52:13.275495 master-0 kubenswrapper[3991]: I0318 09:52:13.274856 3991 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="298568714a91c392789c4d35479df7f1885608033896abf7a1e846e24cce84f8" exitCode=1
Mar 18 09:52:13.278958 master-0 kubenswrapper[3991]: I0318 09:52:13.278912 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:52:13.279076 master-0 kubenswrapper[3991]: I0318 09:52:13.279050 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:52:13.279113 master-0 kubenswrapper[3991]: I0318 09:52:13.279084 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:52:13.279595 master-0 kubenswrapper[3991]: I0318 09:52:13.279561 3991 scope.go:117] "RemoveContainer" containerID="298568714a91c392789c4d35479df7f1885608033896abf7a1e846e24cce84f8"
Mar 18 09:52:13.281273 master-0 kubenswrapper[3991]: I0318 09:52:13.281216 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"5a898e220fc5eed6a4a32559913535749eb16cc2a7cd17e978e4c62aa7e6452a"}
Mar 18 09:52:13.281273 master-0 kubenswrapper[3991]: I0318 09:52:13.281259 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:52:13.281355 master-0 kubenswrapper[3991]: I0318 09:52:13.281308 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:52:13.282430 master-0 kubenswrapper[3991]: I0318 09:52:13.282390 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:52:13.282484 master-0 kubenswrapper[3991]: I0318 09:52:13.282434 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:52:13.282525
master-0 kubenswrapper[3991]: I0318 09:52:13.282488 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 09:52:13.282525 master-0 kubenswrapper[3991]: I0318 09:52:13.282511 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:52:13.282595 master-0 kubenswrapper[3991]: I0318 09:52:13.282453 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 09:52:13.282595 master-0 kubenswrapper[3991]: I0318 09:52:13.282572 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:52:14.283901 master-0 kubenswrapper[3991]: I0318 09:52:14.283837 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/1.log" Mar 18 09:52:14.284554 master-0 kubenswrapper[3991]: I0318 09:52:14.284514 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/0.log" Mar 18 09:52:14.284876 master-0 kubenswrapper[3991]: I0318 09:52:14.284840 3991 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="90da8b24f7ea8d20a3716f3037b813048757e01bbb908d8dd97ca491e4848ef7" exitCode=1 Mar 18 09:52:14.284939 master-0 kubenswrapper[3991]: I0318 09:52:14.284869 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"90da8b24f7ea8d20a3716f3037b813048757e01bbb908d8dd97ca491e4848ef7"} Mar 18 09:52:14.284939 master-0 kubenswrapper[3991]: I0318 09:52:14.284912 3991 scope.go:117] "RemoveContainer" 
containerID="298568714a91c392789c4d35479df7f1885608033896abf7a1e846e24cce84f8" Mar 18 09:52:14.284939 master-0 kubenswrapper[3991]: I0318 09:52:14.284934 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 09:52:14.285658 master-0 kubenswrapper[3991]: I0318 09:52:14.285626 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 09:52:14.285658 master-0 kubenswrapper[3991]: I0318 09:52:14.285652 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 09:52:14.285658 master-0 kubenswrapper[3991]: I0318 09:52:14.285661 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:52:14.285957 master-0 kubenswrapper[3991]: I0318 09:52:14.285932 3991 scope.go:117] "RemoveContainer" containerID="90da8b24f7ea8d20a3716f3037b813048757e01bbb908d8dd97ca491e4848ef7" Mar 18 09:52:14.286082 master-0 kubenswrapper[3991]: E0318 09:52:14.286052 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19" Mar 18 09:52:14.492413 master-0 kubenswrapper[3991]: I0318 09:52:14.491907 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 09:52:14.894545 master-0 kubenswrapper[3991]: I0318 09:52:14.889266 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 09:52:15.288540 master-0 kubenswrapper[3991]: I0318 09:52:15.288467 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/1.log" Mar 18 09:52:15.289775 master-0 kubenswrapper[3991]: I0318 09:52:15.289731 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 09:52:15.290880 master-0 kubenswrapper[3991]: I0318 09:52:15.290807 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 09:52:15.290880 master-0 kubenswrapper[3991]: I0318 09:52:15.290875 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 09:52:15.290955 master-0 kubenswrapper[3991]: I0318 09:52:15.290889 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:52:15.291352 master-0 kubenswrapper[3991]: I0318 09:52:15.291326 3991 scope.go:117] "RemoveContainer" containerID="90da8b24f7ea8d20a3716f3037b813048757e01bbb908d8dd97ca491e4848ef7" Mar 18 09:52:15.291555 master-0 kubenswrapper[3991]: E0318 09:52:15.291522 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19" Mar 18 09:52:15.890355 master-0 kubenswrapper[3991]: I0318 09:52:15.890223 3991 csi_plugin.go:884] Failed to contact API server when waiting for 
CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 09:52:16.293762 master-0 kubenswrapper[3991]: I0318 09:52:16.293691 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"a8a79bb9813c53d6a7944ac3a61efc1cc0406057f3915265e59c26643cc48a9e"} Mar 18 09:52:16.293762 master-0 kubenswrapper[3991]: I0318 09:52:16.293730 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 09:52:16.294908 master-0 kubenswrapper[3991]: I0318 09:52:16.294869 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 09:52:16.294908 master-0 kubenswrapper[3991]: I0318 09:52:16.294906 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 09:52:16.294908 master-0 kubenswrapper[3991]: I0318 09:52:16.294915 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:52:16.296902 master-0 kubenswrapper[3991]: I0318 09:52:16.296841 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"f94e501b0ad12236c03bc538f983952a18a8058deb0777210379742bce193fde"} Mar 18 09:52:16.297024 master-0 kubenswrapper[3991]: I0318 09:52:16.296984 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 09:52:16.297657 master-0 kubenswrapper[3991]: I0318 09:52:16.297624 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 09:52:16.297657 master-0 
kubenswrapper[3991]: I0318 09:52:16.297654 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 09:52:16.297754 master-0 kubenswrapper[3991]: I0318 09:52:16.297665 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:52:16.421408 master-0 kubenswrapper[3991]: I0318 09:52:16.421328 3991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 09:52:16.521761 master-0 kubenswrapper[3991]: E0318 09:52:16.521569 3991 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 18 09:52:16.748215 master-0 kubenswrapper[3991]: I0318 09:52:16.748098 3991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 09:52:16.864388 master-0 kubenswrapper[3991]: I0318 09:52:16.864305 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 09:52:16.865760 master-0 kubenswrapper[3991]: I0318 09:52:16.865715 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 09:52:16.865760 master-0 kubenswrapper[3991]: I0318 09:52:16.865763 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 09:52:16.865903 master-0 kubenswrapper[3991]: I0318 09:52:16.865780 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:52:16.865903 master-0 kubenswrapper[3991]: I0318 09:52:16.865885 3991 kubelet_node_status.go:76] "Attempting to register node" 
node="master-0" Mar 18 09:52:16.873658 master-0 kubenswrapper[3991]: E0318 09:52:16.873573 3991 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Mar 18 09:52:16.886191 master-0 kubenswrapper[3991]: I0318 09:52:16.886136 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 09:52:17.143034 master-0 kubenswrapper[3991]: E0318 09:52:17.142887 3991 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 18 09:52:17.300255 master-0 kubenswrapper[3991]: I0318 09:52:17.300179 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 09:52:17.300255 master-0 kubenswrapper[3991]: I0318 09:52:17.300249 3991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 09:52:17.300888 master-0 kubenswrapper[3991]: I0318 09:52:17.300210 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 09:52:17.301415 master-0 kubenswrapper[3991]: I0318 09:52:17.301376 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 09:52:17.301473 master-0 kubenswrapper[3991]: I0318 09:52:17.301425 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 09:52:17.301473 master-0 kubenswrapper[3991]: I0318 09:52:17.301443 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:52:17.301616 master-0 
kubenswrapper[3991]: I0318 09:52:17.301569 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 09:52:17.301659 master-0 kubenswrapper[3991]: I0318 09:52:17.301640 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 09:52:17.301699 master-0 kubenswrapper[3991]: I0318 09:52:17.301659 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:52:17.487571 master-0 kubenswrapper[3991]: W0318 09:52:17.487430 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 18 09:52:17.487571 master-0 kubenswrapper[3991]: E0318 09:52:17.487521 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 18 09:52:17.605032 master-0 kubenswrapper[3991]: W0318 09:52:17.604924 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 18 09:52:17.605032 master-0 kubenswrapper[3991]: E0318 09:52:17.604994 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 18 09:52:17.891309 master-0 kubenswrapper[3991]: I0318 09:52:17.891216 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode 
publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 09:52:18.302111 master-0 kubenswrapper[3991]: I0318 09:52:18.302057 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 09:52:18.303204 master-0 kubenswrapper[3991]: I0318 09:52:18.303185 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 09:52:18.303252 master-0 kubenswrapper[3991]: I0318 09:52:18.303229 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 09:52:18.303252 master-0 kubenswrapper[3991]: I0318 09:52:18.303246 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:52:18.522084 master-0 kubenswrapper[3991]: I0318 09:52:18.521978 3991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 09:52:18.522357 master-0 kubenswrapper[3991]: I0318 09:52:18.522208 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 09:52:18.523676 master-0 kubenswrapper[3991]: I0318 09:52:18.523613 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 09:52:18.523676 master-0 kubenswrapper[3991]: I0318 09:52:18.523672 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 09:52:18.523947 master-0 kubenswrapper[3991]: I0318 09:52:18.523699 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:52:18.529652 master-0 kubenswrapper[3991]: I0318 09:52:18.529593 3991 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 09:52:18.890148 master-0 kubenswrapper[3991]: I0318 09:52:18.890034 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 09:52:19.038766 master-0 kubenswrapper[3991]: W0318 09:52:19.038716 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Mar 18 09:52:19.039112 master-0 kubenswrapper[3991]: E0318 09:52:19.038797 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 18 09:52:19.305320 master-0 kubenswrapper[3991]: I0318 09:52:19.305238 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 09:52:19.306289 master-0 kubenswrapper[3991]: I0318 09:52:19.305475 3991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 09:52:19.306505 master-0 kubenswrapper[3991]: I0318 09:52:19.306413 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 09:52:19.306505 master-0 kubenswrapper[3991]: I0318 09:52:19.306498 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 09:52:19.306698 master-0 kubenswrapper[3991]: I0318 09:52:19.306527 
3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:52:19.312427 master-0 kubenswrapper[3991]: I0318 09:52:19.312367 3991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 09:52:19.885049 master-0 kubenswrapper[3991]: I0318 09:52:19.884961 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 09:52:19.886036 master-0 kubenswrapper[3991]: E0318 09:52:19.885792 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de6ba7c6f39d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:51:56.879813072 +0000 UTC m=+0.838753007,LastTimestamp:2026-03-18 09:51:56.879813072 +0000 UTC m=+0.838753007,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:19.888786 master-0 kubenswrapper[3991]: E0318 09:52:19.888608 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de6ba80209f57 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:51:56.941770583 +0000 UTC m=+0.900710478,LastTimestamp:2026-03-18 09:51:56.941770583 +0000 UTC m=+0.900710478,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:19.895196 master-0 kubenswrapper[3991]: E0318 09:52:19.895037 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de6ba8020fb5f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:51:56.941794143 +0000 UTC m=+0.900734038,LastTimestamp:2026-03-18 09:51:56.941794143 +0000 UTC m=+0.900734038,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:19.900879 master-0 kubenswrapper[3991]: E0318 09:52:19.900612 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de6ba80211ca7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:51:56.941802663 +0000 UTC m=+0.900742558,LastTimestamp:2026-03-18 09:51:56.941802663 +0000 UTC m=+0.900742558,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:19.907038 master-0 kubenswrapper[3991]: E0318 09:52:19.906792 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de6ba8c2d5b55 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:51:57.143931733 +0000 UTC m=+1.102871668,LastTimestamp:2026-03-18 09:51:57.143931733 +0000 UTC m=+1.102871668,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:19.913320 master-0 kubenswrapper[3991]: E0318 09:52:19.913173 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de6ba80209f57\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de6ba80209f57 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:51:56.941770583 +0000 UTC m=+0.900710478,LastTimestamp:2026-03-18 09:51:57.241938882 +0000 UTC m=+1.200878857,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:19.919036 master-0 kubenswrapper[3991]: E0318 09:52:19.918890 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de6ba8020fb5f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de6ba8020fb5f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:51:56.941794143 +0000 UTC m=+0.900734038,LastTimestamp:2026-03-18 09:51:57.242062365 +0000 UTC m=+1.201002300,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:19.927220 master-0 kubenswrapper[3991]: E0318 09:52:19.927083 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de6ba80211ca7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de6ba80211ca7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:51:56.941802663 +0000 UTC m=+0.900742558,LastTimestamp:2026-03-18 09:51:57.242085655 +0000 UTC m=+1.201025590,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:19.932411 master-0 kubenswrapper[3991]: E0318 09:52:19.932235 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de6ba80209f57\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de6ba80209f57 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:51:56.941770583 +0000 UTC m=+0.900710478,LastTimestamp:2026-03-18 09:51:57.250819738 +0000 UTC m=+1.209781554,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:19.939684 master-0 kubenswrapper[3991]: E0318 09:52:19.939516 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de6ba8020fb5f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de6ba8020fb5f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:51:56.941794143 +0000 UTC m=+0.900734038,LastTimestamp:2026-03-18 09:51:57.25087224 +0000 UTC m=+1.209812165,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:19.946679 master-0 kubenswrapper[3991]: E0318 09:52:19.946533 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de6ba80211ca7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de6ba80211ca7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:51:56.941802663 +0000 UTC m=+0.900742558,LastTimestamp:2026-03-18 09:51:57.25088808 +0000 UTC m=+1.209828015,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:19.951998 master-0 kubenswrapper[3991]: E0318 09:52:19.951799 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de6ba80209f57\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de6ba80209f57 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:51:56.941770583 +0000 UTC m=+0.900710478,LastTimestamp:2026-03-18 09:51:57.251904504 +0000 UTC m=+1.210844429,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:19.959196 master-0 kubenswrapper[3991]: E0318 09:52:19.959050 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de6ba8020fb5f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de6ba8020fb5f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:51:56.941794143 +0000 UTC m=+0.900734038,LastTimestamp:2026-03-18 09:51:57.251949515 +0000 UTC m=+1.210889450,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:19.964290 master-0 kubenswrapper[3991]: E0318 09:52:19.964156 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de6ba80211ca7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de6ba80211ca7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:51:56.941802663 +0000 UTC m=+0.900742558,LastTimestamp:2026-03-18 09:51:57.251969075 +0000 UTC m=+1.210909010,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:19.971765 master-0 kubenswrapper[3991]: E0318 09:52:19.971596 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de6ba80209f57\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de6ba80209f57 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:51:56.941770583 +0000 UTC m=+0.900710478,LastTimestamp:2026-03-18 09:51:57.252717783 +0000 UTC m=+1.211657698,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:19.977333 master-0 kubenswrapper[3991]: E0318 09:52:19.977208 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de6ba8020fb5f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de6ba8020fb5f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:51:56.941794143 +0000 UTC m=+0.900734038,LastTimestamp:2026-03-18 09:51:57.252741463 +0000 UTC m=+1.211681378,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:19.983023 master-0 kubenswrapper[3991]: E0318 09:52:19.982808 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de6ba80211ca7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de6ba80211ca7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:51:56.941802663 +0000 UTC m=+0.900742558,LastTimestamp:2026-03-18 09:51:57.252756233 +0000 UTC m=+1.211696148,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:19.988755 master-0 kubenswrapper[3991]: E0318 09:52:19.988621 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de6ba80209f57\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de6ba80209f57 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:51:56.941770583 +0000 UTC m=+0.900710478,LastTimestamp:2026-03-18 09:51:57.253149843 +0000 UTC m=+1.212089748,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:19.996675 master-0 kubenswrapper[3991]: E0318 09:52:19.996417 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de6ba80209f57\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de6ba80209f57 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:51:56.941770583 +0000 UTC m=+0.900710478,LastTimestamp:2026-03-18 09:51:57.253172243 +0000 UTC m=+1.212112138,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.005020 master-0 kubenswrapper[3991]: E0318 09:52:20.004861 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de6ba8020fb5f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de6ba8020fb5f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:51:56.941794143 +0000 UTC m=+0.900734038,LastTimestamp:2026-03-18 09:51:57.253188454 +0000 UTC m=+1.212128349,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.013918 master-0 kubenswrapper[3991]: E0318 09:52:20.013711 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de6ba80211ca7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de6ba80211ca7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:51:56.941802663 +0000 UTC m=+0.900742558,LastTimestamp:2026-03-18 09:51:57.253197244 +0000 UTC m=+1.212137139,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.019429 master-0 kubenswrapper[3991]: E0318 09:52:20.019274 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de6ba8020fb5f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de6ba8020fb5f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:51:56.941794143 +0000 UTC m=+0.900734038,LastTimestamp:2026-03-18 09:51:57.253249965 +0000 UTC m=+1.212189900,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.025996 master-0 kubenswrapper[3991]: E0318 09:52:20.025782 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de6ba80211ca7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de6ba80211ca7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:51:56.941802663 +0000 UTC m=+0.900742558,LastTimestamp:2026-03-18 09:51:57.253282086 +0000 UTC m=+1.212222011,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.032597 master-0 kubenswrapper[3991]: E0318 09:52:20.032467 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de6ba80209f57\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de6ba80209f57 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:51:56.941770583 +0000 UTC m=+0.900710478,LastTimestamp:2026-03-18 09:51:57.255511608 +0000 UTC m=+1.214451513,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.037109 master-0 kubenswrapper[3991]: E0318 09:52:20.036980 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de6ba80209f57\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de6ba80209f57 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:51:56.941770583 +0000 UTC m=+0.900710478,LastTimestamp:2026-03-18 09:51:57.255517678 +0000 UTC m=+1.214457593,Count:9,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.043508 master-0 kubenswrapper[3991]: E0318 09:52:20.043363 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189de6bb1a3e9ccd kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:51:59.527427277 +0000 UTC m=+3.486367222,LastTimestamp:2026-03-18 09:51:59.527427277 +0000 UTC m=+3.486367222,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.049850 master-0 kubenswrapper[3991]: E0318 09:52:20.049676 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189de6bb1b24b64b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:51:59.542507083 +0000 UTC m=+3.501446988,LastTimestamp:2026-03-18 09:51:59.542507083 +0000 UTC m=+3.501446988,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.054815 master-0 kubenswrapper[3991]: E0318 09:52:20.054710 3991 event.go:359] "Server rejected event 
(will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de6bb26fec05b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:51:59.741345883 +0000 UTC m=+3.700285818,LastTimestamp:2026-03-18 09:51:59.741345883 +0000 UTC m=+3.700285818,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.059118 master-0 kubenswrapper[3991]: E0318 09:52:20.058993 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de6bb2c6276a5 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:51:59.831766693 +0000 UTC 
m=+3.790706608,LastTimestamp:2026-03-18 09:51:59.831766693 +0000 UTC m=+3.790706608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.063769 master-0 kubenswrapper[3991]: E0318 09:52:20.063637 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189de6bb3929ecbf openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:00.046165183 +0000 UTC m=+4.005105088,LastTimestamp:2026-03-18 09:52:00.046165183 +0000 UTC m=+4.005105088,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.069898 master-0 kubenswrapper[3991]: E0318 09:52:20.069756 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de6bdb7985da8 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" in 11.015s (11.015s including waiting). Image size: 465090934 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:10.757266856 +0000 UTC m=+14.716206791,LastTimestamp:2026-03-18 09:52:10.757266856 +0000 UTC m=+14.716206791,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.076044 master-0 kubenswrapper[3991]: E0318 09:52:20.075807 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189de6bdb7c3f0d5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\" in 10.713s (10.713s including waiting). 
Image size: 529326739 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:10.760122581 +0000 UTC m=+14.719062486,LastTimestamp:2026-03-18 09:52:10.760122581 +0000 UTC m=+14.719062486,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.083664 master-0 kubenswrapper[3991]: E0318 09:52:20.083509 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189de6bdbbde1af7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" in 11.286s (11.286s including waiting). 
Image size: 943841779 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:10.828946167 +0000 UTC m=+14.787886072,LastTimestamp:2026-03-18 09:52:10.828946167 +0000 UTC m=+14.787886072,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.089366 master-0 kubenswrapper[3991]: E0318 09:52:20.089122 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de6bdc7e98648 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:11.031021128 +0000 UTC m=+14.989961023,LastTimestamp:2026-03-18 09:52:11.031021128 +0000 UTC m=+14.989961023,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.095981 master-0 kubenswrapper[3991]: E0318 09:52:20.095801 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189de6bdc7ea832e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:11.03108587 +0000 UTC m=+14.990025775,LastTimestamp:2026-03-18 09:52:11.03108587 +0000 UTC m=+14.990025775,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.102074 master-0 kubenswrapper[3991]: E0318 09:52:20.101953 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189de6bdc807dcdb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:11.033009371 +0000 UTC m=+14.991949266,LastTimestamp:2026-03-18 09:52:11.033009371 +0000 UTC m=+14.991949266,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.120555 master-0 kubenswrapper[3991]: E0318 09:52:20.120358 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189de6bdc844c420 
kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" in 11.509s (11.509s including waiting). Image size: 943841779 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:11.037000736 +0000 UTC m=+14.995940661,LastTimestamp:2026-03-18 09:52:11.037000736 +0000 UTC m=+14.995940661,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.129620 master-0 kubenswrapper[3991]: E0318 09:52:20.129466 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189de6bdcbe7fff2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:11.098030066 +0000 UTC m=+15.056969961,LastTimestamp:2026-03-18 09:52:11.098030066 +0000 UTC m=+15.056969961,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.135591 master-0 kubenswrapper[3991]: E0318 09:52:20.135378 3991 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189de6bdcc11f7b5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:11.100780469 +0000 UTC m=+15.059720374,LastTimestamp:2026-03-18 09:52:11.100780469 +0000 UTC m=+15.059720374,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.142110 master-0 kubenswrapper[3991]: E0318 09:52:20.141886 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189de6bdcc263310 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:11.102106384 +0000 UTC m=+15.061046279,LastTimestamp:2026-03-18 09:52:11.102106384 +0000 UTC m=+15.061046279,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.149406 master-0 kubenswrapper[3991]: E0318 09:52:20.149208 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de6bdcd20192c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:11.118483756 +0000 UTC m=+15.077423651,LastTimestamp:2026-03-18 09:52:11.118483756 +0000 UTC m=+15.077423651,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.155767 master-0 kubenswrapper[3991]: E0318 09:52:20.155551 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189de6bde7f6575c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" 
already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:11.568731996 +0000 UTC m=+15.527671931,LastTimestamp:2026-03-18 09:52:11.568731996 +0000 UTC m=+15.527671931,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.160612 master-0 kubenswrapper[3991]: E0318 09:52:20.160475 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de6bde7fab1d7 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:11.569017303 +0000 UTC m=+15.527957238,LastTimestamp:2026-03-18 09:52:11.569017303 +0000 UTC m=+15.527957238,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.165170 master-0 kubenswrapper[3991]: E0318 09:52:20.164976 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189de6bdff51cd27 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:11.960601895 +0000 UTC m=+15.919541800,LastTimestamp:2026-03-18 09:52:11.960601895 +0000 UTC m=+15.919541800,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.169486 master-0 kubenswrapper[3991]: E0318 09:52:20.169306 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de6be004fd3a8 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" in 12.145s (12.145s including waiting). 
Image size: 943841779 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:11.977249704 +0000 UTC m=+15.936189629,LastTimestamp:2026-03-18 09:52:11.977249704 +0000 UTC m=+15.936189629,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.173884 master-0 kubenswrapper[3991]: E0318 09:52:20.173706 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189de6be09db2427 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:12.137374759 +0000 UTC m=+16.096314664,LastTimestamp:2026-03-18 09:52:12.137374759 +0000 UTC m=+16.096314664,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.177727 master-0 kubenswrapper[3991]: E0318 09:52:20.177531 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189de6be0b2c0e1f kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:12.159454751 +0000 UTC m=+16.118394656,LastTimestamp:2026-03-18 09:52:12.159454751 +0000 UTC m=+16.118394656,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.182620 master-0 kubenswrapper[3991]: E0318 09:52:20.182473 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189de6be1db5d3c3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:12.470473667 +0000 UTC m=+16.429413602,LastTimestamp:2026-03-18 09:52:12.470473667 +0000 UTC m=+16.429413602,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.188565 master-0 kubenswrapper[3991]: E0318 09:52:20.188426 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de6be202f9dfa kube-system 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:12.512009722 +0000 UTC m=+16.470949647,LastTimestamp:2026-03-18 09:52:12.512009722 +0000 UTC m=+16.470949647,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.195272 master-0 kubenswrapper[3991]: E0318 09:52:20.195085 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189de6be20770842 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:12.516689986 +0000 UTC m=+16.475629881,LastTimestamp:2026-03-18 09:52:12.516689986 +0000 UTC m=+16.475629881,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.202082 master-0 kubenswrapper[3991]: E0318 09:52:20.201922 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de6be236b57e0 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:12.566255584 +0000 UTC m=+16.525195519,LastTimestamp:2026-03-18 09:52:12.566255584 +0000 UTC m=+16.525195519,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.207256 master-0 kubenswrapper[3991]: E0318 09:52:20.207110 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de6be244f7334 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:12.581204788 +0000 UTC m=+16.540144723,LastTimestamp:2026-03-18 09:52:12.581204788 +0000 UTC m=+16.540144723,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.212226 master-0 kubenswrapper[3991]: E0318 09:52:20.212090 3991 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189de6be24504482 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:12.58125837 +0000 UTC m=+16.540198295,LastTimestamp:2026-03-18 09:52:12.58125837 +0000 UTC m=+16.540198295,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.217634 master-0 kubenswrapper[3991]: E0318 09:52:20.217491 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de6be24766b7c kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:12.583758716 +0000 UTC m=+16.542698651,LastTimestamp:2026-03-18 09:52:12.583758716 +0000 UTC 
m=+16.542698651,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.222931 master-0 kubenswrapper[3991]: E0318 09:52:20.222722 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189de6be247ed176 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:12.58430911 +0000 UTC m=+16.543249035,LastTimestamp:2026-03-18 09:52:12.58430911 +0000 UTC m=+16.543249035,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.227880 master-0 kubenswrapper[3991]: E0318 09:52:20.227684 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de6be25bcbb48 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:12.60514388 +0000 UTC m=+16.564083805,LastTimestamp:2026-03-18 09:52:12.60514388 +0000 UTC m=+16.564083805,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.233406 master-0 kubenswrapper[3991]: E0318 09:52:20.233305 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189de6bde7fab1d7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de6bde7fab1d7 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:11.569017303 +0000 UTC m=+15.527957238,LastTimestamp:2026-03-18 09:52:13.283407075 +0000 UTC m=+17.242347010,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.239034 master-0 kubenswrapper[3991]: E0318 09:52:20.238885 3991 
event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189de6be236b57e0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de6be236b57e0 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:12.566255584 +0000 UTC m=+16.525195519,LastTimestamp:2026-03-18 09:52:13.608586835 +0000 UTC m=+17.567526750,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.244511 master-0 kubenswrapper[3991]: E0318 09:52:20.244356 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189de6be25bcbb48\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de6be25bcbb48 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:12.60514388 +0000 UTC 
m=+16.564083805,LastTimestamp:2026-03-18 09:52:13.751812724 +0000 UTC m=+17.710752619,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.249387 master-0 kubenswrapper[3991]: E0318 09:52:20.249274 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de6be89ed0c75 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:14.286031989 +0000 UTC m=+18.244971874,LastTimestamp:2026-03-18 09:52:14.286031989 +0000 UTC m=+18.244971874,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.253967 master-0 kubenswrapper[3991]: E0318 09:52:20.253788 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189de6be89ed0c75\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de6be89ed0c75 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:14.286031989 +0000 UTC m=+18.244971874,LastTimestamp:2026-03-18 09:52:15.291484697 +0000 UTC m=+19.250424592,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.258604 master-0 kubenswrapper[3991]: E0318 09:52:20.258502 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de6bef3c26f84 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\" in 3.477s (3.477s including waiting). 
Image size: 505246690 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:16.061624196 +0000 UTC m=+20.020564101,LastTimestamp:2026-03-18 09:52:16.061624196 +0000 UTC m=+20.020564101,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.263617 master-0 kubenswrapper[3991]: E0318 09:52:20.263523 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189de6bef46a208a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\" in 3.488s (3.488s including waiting). 
Image size: 514984269 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:16.072614026 +0000 UTC m=+20.031553931,LastTimestamp:2026-03-18 09:52:16.072614026 +0000 UTC m=+20.031553931,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.268583 master-0 kubenswrapper[3991]: E0318 09:52:20.268492 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de6beff949f50 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:16.259948368 +0000 UTC m=+20.218888263,LastTimestamp:2026-03-18 09:52:16.259948368 +0000 UTC m=+20.218888263,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.273586 master-0 kubenswrapper[3991]: E0318 09:52:20.273491 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189de6beff9509cd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:16.259975629 +0000 UTC m=+20.218915524,LastTimestamp:2026-03-18 09:52:16.259975629 +0000 UTC m=+20.218915524,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.278700 master-0 kubenswrapper[3991]: E0318 09:52:20.278509 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de6bf00529ddd kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:16.272399837 +0000 UTC m=+20.231339782,LastTimestamp:2026-03-18 09:52:16.272399837 +0000 UTC m=+20.231339782,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.283683 master-0 kubenswrapper[3991]: E0318 09:52:20.283499 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189de6bf008f1f5f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:16.276365151 +0000 UTC m=+20.235305046,LastTimestamp:2026-03-18 09:52:16.276365151 +0000 UTC m=+20.235305046,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:52:20.307945 master-0 kubenswrapper[3991]: I0318 09:52:20.307773 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 09:52:20.308939 master-0 kubenswrapper[3991]: I0318 09:52:20.308880 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 09:52:20.309047 master-0 kubenswrapper[3991]: I0318 09:52:20.308949 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 09:52:20.309047 master-0 kubenswrapper[3991]: I0318 09:52:20.308974 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:52:20.891682 master-0 kubenswrapper[3991]: I0318 09:52:20.891612 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 09:52:21.310015 master-0 kubenswrapper[3991]: I0318 09:52:21.309934 3991 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:52:21.310985 master-0 kubenswrapper[3991]: I0318 09:52:21.310801 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:52:21.310985 master-0 kubenswrapper[3991]: I0318 09:52:21.310924 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:52:21.310985 master-0 kubenswrapper[3991]: I0318 09:52:21.310955 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:52:21.889106 master-0 kubenswrapper[3991]: I0318 09:52:21.888896 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 09:52:22.892245 master-0 kubenswrapper[3991]: I0318 09:52:22.892141 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 09:52:23.529691 master-0 kubenswrapper[3991]: E0318 09:52:23.529597 3991 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 18 09:52:23.874968 master-0 kubenswrapper[3991]: I0318 09:52:23.874698 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:52:23.876354 master-0 kubenswrapper[3991]: I0318 09:52:23.876317 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:52:23.876431 master-0 kubenswrapper[3991]: I0318 09:52:23.876418 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:52:23.876462 master-0 kubenswrapper[3991]: I0318 09:52:23.876441 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:52:23.876524 master-0 kubenswrapper[3991]: I0318 09:52:23.876501 3991 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 18 09:52:23.887659 master-0 kubenswrapper[3991]: E0318 09:52:23.887592 3991 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Mar 18 09:52:23.887751 master-0 kubenswrapper[3991]: I0318 09:52:23.887711 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 09:52:24.883450 master-0 kubenswrapper[3991]: I0318 09:52:24.883369 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 09:52:25.521390 master-0 kubenswrapper[3991]: I0318 09:52:25.521335 3991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:52:25.521948 master-0 kubenswrapper[3991]: I0318 09:52:25.521917 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:52:25.526683 master-0 kubenswrapper[3991]: I0318 09:52:25.526644 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:52:25.526960 master-0 kubenswrapper[3991]: I0318 09:52:25.526932 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:52:25.527146 master-0 kubenswrapper[3991]: I0318 09:52:25.527122 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:52:25.529006 master-0 kubenswrapper[3991]: I0318 09:52:25.528948 3991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:52:25.591029 master-0 kubenswrapper[3991]: I0318 09:52:25.590935 3991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:52:25.598277 master-0 kubenswrapper[3991]: I0318 09:52:25.598203 3991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:52:25.890118 master-0 kubenswrapper[3991]: I0318 09:52:25.890062 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 09:52:26.323007 master-0 kubenswrapper[3991]: I0318 09:52:26.322911 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:52:26.324733 master-0 kubenswrapper[3991]: I0318 09:52:26.324671 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:52:26.324733 master-0 kubenswrapper[3991]: I0318 09:52:26.324730 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:52:26.324998 master-0 kubenswrapper[3991]: I0318 09:52:26.324763 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:52:26.331013 master-0 kubenswrapper[3991]: I0318 09:52:26.330950 3991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:52:26.888479 master-0 kubenswrapper[3991]: I0318 09:52:26.888396 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 09:52:27.143722 master-0 kubenswrapper[3991]: E0318 09:52:27.143554 3991 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 18 09:52:27.325785 master-0 kubenswrapper[3991]: I0318 09:52:27.325718 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:52:27.326820 master-0 kubenswrapper[3991]: I0318 09:52:27.326765 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:52:27.326820 master-0 kubenswrapper[3991]: I0318 09:52:27.326817 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:52:27.327095 master-0 kubenswrapper[3991]: I0318 09:52:27.326863 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:52:27.890585 master-0 kubenswrapper[3991]: I0318 09:52:27.890513 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 09:52:28.186215 master-0 kubenswrapper[3991]: I0318 09:52:28.186013 3991 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 18 09:52:28.206077 master-0 kubenswrapper[3991]: I0318 09:52:28.205994 3991 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Mar 18 09:52:28.328576 master-0 kubenswrapper[3991]: I0318 09:52:28.328489 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:52:28.329762 master-0 kubenswrapper[3991]: I0318 09:52:28.329700 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:52:28.329816 master-0 kubenswrapper[3991]: I0318 09:52:28.329787 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:52:28.329816 master-0 kubenswrapper[3991]: I0318 09:52:28.329808 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:52:28.889051 master-0 kubenswrapper[3991]: I0318 09:52:28.888977 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 09:52:29.891325 master-0 kubenswrapper[3991]: I0318 09:52:29.891225 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 09:52:30.150183 master-0 kubenswrapper[3991]: I0318 09:52:30.149952 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:52:30.151784 master-0 kubenswrapper[3991]: I0318 09:52:30.151603 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:52:30.151784 master-0 kubenswrapper[3991]: I0318 09:52:30.151675 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:52:30.151784 master-0 kubenswrapper[3991]: I0318 09:52:30.151698 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:52:30.152386 master-0 kubenswrapper[3991]: I0318 09:52:30.152334 3991 scope.go:117] "RemoveContainer" containerID="90da8b24f7ea8d20a3716f3037b813048757e01bbb908d8dd97ca491e4848ef7"
Mar 18 09:52:30.161318 master-0 kubenswrapper[3991]: E0318 09:52:30.161150 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189de6bde7fab1d7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de6bde7fab1d7 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:11.569017303 +0000 UTC m=+15.527957238,LastTimestamp:2026-03-18 09:52:30.153941989 +0000 UTC m=+34.112881924,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 09:52:30.388485 master-0 kubenswrapper[3991]: E0318 09:52:30.388306 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189de6be236b57e0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de6be236b57e0 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:12.566255584 +0000 UTC m=+16.525195519,LastTimestamp:2026-03-18 09:52:30.382443728 +0000 UTC m=+34.341383643,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 09:52:30.414113 master-0 kubenswrapper[3991]: E0318 09:52:30.413852 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189de6be25bcbb48\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de6be25bcbb48 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:12.60514388 +0000 UTC m=+16.564083805,LastTimestamp:2026-03-18 09:52:30.407910169 +0000 UTC m=+34.366850064,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 09:52:30.538603 master-0 kubenswrapper[3991]: E0318 09:52:30.538486 3991 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 18 09:52:30.887877 master-0 kubenswrapper[3991]: I0318 09:52:30.887755 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:52:30.889343 master-0 kubenswrapper[3991]: I0318 09:52:30.889229 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:52:30.889343 master-0 kubenswrapper[3991]: I0318 09:52:30.889333 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:52:30.889540 master-0 kubenswrapper[3991]: I0318 09:52:30.889359 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:52:30.889540 master-0 kubenswrapper[3991]: I0318 09:52:30.889350 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 09:52:30.889540 master-0 kubenswrapper[3991]: I0318 09:52:30.889466 3991 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 18 09:52:30.897255 master-0 kubenswrapper[3991]: E0318 09:52:30.897184 3991 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Mar 18 09:52:31.338759 master-0 kubenswrapper[3991]: I0318 09:52:31.338665 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log"
Mar 18 09:52:31.339436 master-0 kubenswrapper[3991]: I0318 09:52:31.339387 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/1.log"
Mar 18 09:52:31.340046 master-0 kubenswrapper[3991]: I0318 09:52:31.339983 3991 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="d2de72527ed6923a5ffc8e75d557ea5b2e3fbc7f0f250aeb34b97b6d6a4b673b" exitCode=1
Mar 18 09:52:31.340131 master-0 kubenswrapper[3991]: I0318 09:52:31.340058 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"d2de72527ed6923a5ffc8e75d557ea5b2e3fbc7f0f250aeb34b97b6d6a4b673b"}
Mar 18 09:52:31.340131 master-0 kubenswrapper[3991]: I0318 09:52:31.340120 3991 scope.go:117] "RemoveContainer" containerID="90da8b24f7ea8d20a3716f3037b813048757e01bbb908d8dd97ca491e4848ef7"
Mar 18 09:52:31.340246 master-0 kubenswrapper[3991]: I0318 09:52:31.340221 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:52:31.341299 master-0 kubenswrapper[3991]: I0318 09:52:31.341230 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:52:31.341299 master-0 kubenswrapper[3991]: I0318 09:52:31.341294 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:52:31.341445 master-0 kubenswrapper[3991]: I0318 09:52:31.341309 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:52:31.342281 master-0 kubenswrapper[3991]: I0318 09:52:31.341864 3991 scope.go:117] "RemoveContainer" containerID="d2de72527ed6923a5ffc8e75d557ea5b2e3fbc7f0f250aeb34b97b6d6a4b673b"
Mar 18 09:52:31.342281 master-0 kubenswrapper[3991]: E0318 09:52:31.342118 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19"
Mar 18 09:52:31.349330 master-0 kubenswrapper[3991]: E0318 09:52:31.349175 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189de6be89ed0c75\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de6be89ed0c75 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:52:14.286031989 +0000 UTC m=+18.244971874,LastTimestamp:2026-03-18 09:52:31.342067536 +0000 UTC m=+35.301007441,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 09:52:31.889896 master-0 kubenswrapper[3991]: I0318 09:52:31.889809 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 09:52:32.344123 master-0 kubenswrapper[3991]: I0318 09:52:32.344064 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log"
Mar 18 09:52:32.889530 master-0 kubenswrapper[3991]: I0318 09:52:32.889424 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 09:52:33.287042 master-0 kubenswrapper[3991]: I0318 09:52:33.286986 3991 csr.go:261] certificate signing request csr-smg9j is approved, waiting to be issued
Mar 18 09:52:33.846168 master-0 kubenswrapper[3991]: I0318 09:52:33.846089 3991 csr.go:257] certificate signing request csr-smg9j is issued
Mar 18 09:52:33.891361 master-0 kubenswrapper[3991]: I0318 09:52:33.891306 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 09:52:33.906678 master-0 kubenswrapper[3991]: I0318 09:52:33.906619 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 09:52:33.967752 master-0 kubenswrapper[3991]: I0318 09:52:33.967685 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 09:52:34.238988 master-0 kubenswrapper[3991]: I0318 09:52:34.238807 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 09:52:34.238988 master-0 kubenswrapper[3991]: E0318 09:52:34.238879 3991 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Mar 18 09:52:34.259793 master-0 kubenswrapper[3991]: I0318 09:52:34.259735 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 09:52:34.274557 master-0 kubenswrapper[3991]: I0318 09:52:34.274499 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 09:52:34.332090 master-0 kubenswrapper[3991]: I0318 09:52:34.332001 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 09:52:34.588635 master-0 kubenswrapper[3991]: I0318 09:52:34.588542 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 09:52:34.588635 master-0 kubenswrapper[3991]: E0318 09:52:34.588608 3991 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Mar 18 09:52:34.689159 master-0 kubenswrapper[3991]: I0318 09:52:34.689095 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 09:52:34.706338 master-0 kubenswrapper[3991]: I0318 09:52:34.706272 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 09:52:34.716708 master-0 kubenswrapper[3991]: I0318 09:52:34.716643 3991 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Mar 18 09:52:34.775604 master-0 kubenswrapper[3991]: I0318 09:52:34.775520 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 09:52:34.848388 master-0 kubenswrapper[3991]: I0318 09:52:34.848199 3991 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-19 09:43:17 +0000 UTC, rotation deadline is 2026-03-19 04:32:33.026501991 +0000 UTC
Mar 18 09:52:34.848388 master-0 kubenswrapper[3991]: I0318 09:52:34.848265 3991 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 18h39m58.178241667s for next certificate rotation
Mar 18 09:52:35.048582 master-0 kubenswrapper[3991]: I0318 09:52:35.048533 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 09:52:35.048582 master-0 kubenswrapper[3991]: E0318 09:52:35.048580 3991 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Mar 18 09:52:35.617640 master-0 kubenswrapper[3991]: I0318 09:52:35.617564 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 09:52:35.635381 master-0 kubenswrapper[3991]: I0318 09:52:35.635318 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 09:52:35.696457 master-0 kubenswrapper[3991]: I0318 09:52:35.696412 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 09:52:35.956728 master-0 kubenswrapper[3991]: I0318 09:52:35.956626 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 09:52:35.956728 master-0 kubenswrapper[3991]: E0318 09:52:35.956671 3991 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Mar 18 09:52:36.837916 master-0 kubenswrapper[3991]: I0318 09:52:36.837820 3991 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 18 09:52:37.144202 master-0 kubenswrapper[3991]: E0318 09:52:37.144014 3991 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 18 09:52:37.545652 master-0 kubenswrapper[3991]: E0318 09:52:37.545570 3991 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"master-0\" not found" node="master-0"
Mar 18 09:52:37.897664 master-0 kubenswrapper[3991]: I0318 09:52:37.897475 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:52:37.899067 master-0 kubenswrapper[3991]: I0318 09:52:37.898978 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:52:37.899067 master-0 kubenswrapper[3991]: I0318 09:52:37.899051 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:52:37.899067 master-0 kubenswrapper[3991]: I0318 09:52:37.899071 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:52:37.899339 master-0 kubenswrapper[3991]: I0318 09:52:37.899130 3991 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 18 09:52:38.216840 master-0 kubenswrapper[3991]: I0318 09:52:38.216685 3991 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Mar 18 09:52:38.216840 master-0 kubenswrapper[3991]: E0318 09:52:38.216744 3991 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found"
Mar 18 09:52:38.970611 master-0 kubenswrapper[3991]: E0318 09:52:38.967776 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 09:52:39.068650 master-0 kubenswrapper[3991]: E0318 09:52:39.068572 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 09:52:39.169621 master-0 kubenswrapper[3991]: E0318 09:52:39.169551 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 09:52:39.270660 master-0 kubenswrapper[3991]: E0318 09:52:39.270500 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 09:52:39.371212 master-0 kubenswrapper[3991]: E0318 09:52:39.371142 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 09:52:39.472215 master-0 kubenswrapper[3991]: E0318 09:52:39.471879 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 09:52:39.508475 master-0 kubenswrapper[3991]: I0318 09:52:39.508431 3991 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 18 09:52:39.897447 master-0 kubenswrapper[3991]: I0318 09:52:39.897384 3991 apiserver.go:52] "Watching apiserver"
Mar 18 09:52:39.952470 master-0 kubenswrapper[3991]: I0318 09:52:39.951817 3991 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Mar 18 09:52:40.210780 master-0 kubenswrapper[3991]: I0318 09:52:40.210616 3991 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 18 09:52:40.211217 master-0 kubenswrapper[3991]: I0318 09:52:40.210794 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr"]
Mar 18 09:52:40.211316 master-0 kubenswrapper[3991]: I0318 09:52:40.211279 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr"
Mar 18 09:52:40.214223 master-0 kubenswrapper[3991]: I0318 09:52:40.214163 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 18 09:52:40.214341 master-0 kubenswrapper[3991]: I0318 09:52:40.214172 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 18 09:52:40.214754 master-0 kubenswrapper[3991]: I0318 09:52:40.214719 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 18 09:52:40.292226 master-0 kubenswrapper[3991]: I0318 09:52:40.292109 3991 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 18 09:52:40.312043 master-0 kubenswrapper[3991]: I0318 09:52:40.311938 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/15f8941b-dba2-40ba-86d5-3318f5b635cc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr"
Mar 18 09:52:40.312043 master-0 kubenswrapper[3991]: I0318 09:52:40.312027 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/15f8941b-dba2-40ba-86d5-3318f5b635cc-service-ca\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr"
Mar 18 09:52:40.312352 master-0 kubenswrapper[3991]: I0318 09:52:40.312069 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/15f8941b-dba2-40ba-86d5-3318f5b635cc-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr"
Mar 18 09:52:40.312352 master-0 kubenswrapper[3991]: I0318 09:52:40.312107 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr"
Mar 18 09:52:40.312352 master-0 kubenswrapper[3991]: I0318 09:52:40.312152 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15f8941b-dba2-40ba-86d5-3318f5b635cc-kube-api-access\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr"
Mar 18 09:52:40.412861 master-0 kubenswrapper[3991]: I0318 09:52:40.412700 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/15f8941b-dba2-40ba-86d5-3318f5b635cc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr"
Mar 18 09:52:40.413155 master-0 kubenswrapper[3991]: I0318 09:52:40.412926 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/15f8941b-dba2-40ba-86d5-3318f5b635cc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr"
Mar 18 09:52:40.413155 master-0 kubenswrapper[3991]: I0318 09:52:40.413132 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/15f8941b-dba2-40ba-86d5-3318f5b635cc-service-ca\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr"
Mar 18 09:52:40.413305 master-0 kubenswrapper[3991]: I0318 09:52:40.413207 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/15f8941b-dba2-40ba-86d5-3318f5b635cc-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr"
Mar 18 09:52:40.413376 master-0 kubenswrapper[3991]: I0318 09:52:40.413303 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/15f8941b-dba2-40ba-86d5-3318f5b635cc-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr"
Mar 18 09:52:40.413537 master-0 kubenswrapper[3991]: I0318 09:52:40.413448 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr"
Mar 18 09:52:40.413701 master-0 kubenswrapper[3991]: E0318 09:52:40.413638 3991 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 18 09:52:40.413789 master-0 kubenswrapper[3991]: I0318 09:52:40.413647 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15f8941b-dba2-40ba-86d5-3318f5b635cc-kube-api-access\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr"
Mar 18 09:52:40.413972 master-0 kubenswrapper[3991]: E0318 09:52:40.413931 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert podName:15f8941b-dba2-40ba-86d5-3318f5b635cc nodeName:}" failed. No retries permitted until 2026-03-18 09:52:40.913781859 +0000 UTC m=+44.872721784 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert") pod "cluster-version-operator-56d8475767-c2qzr" (UID: "15f8941b-dba2-40ba-86d5-3318f5b635cc") : secret "cluster-version-operator-serving-cert" not found
Mar 18 09:52:40.415136 master-0 kubenswrapper[3991]: I0318 09:52:40.415077 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/15f8941b-dba2-40ba-86d5-3318f5b635cc-service-ca\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr"
Mar 18 09:52:40.918293 master-0 kubenswrapper[3991]: I0318 09:52:40.918176 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr"
Mar 18 09:52:40.918579 master-0 kubenswrapper[3991]: E0318 09:52:40.918383 3991 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 18 09:52:40.918579 master-0 kubenswrapper[3991]: E0318 09:52:40.918478 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert podName:15f8941b-dba2-40ba-86d5-3318f5b635cc nodeName:}" failed. No retries permitted until 2026-03-18 09:52:41.918447803 +0000 UTC m=+45.877387738 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert") pod "cluster-version-operator-56d8475767-c2qzr" (UID: "15f8941b-dba2-40ba-86d5-3318f5b635cc") : secret "cluster-version-operator-serving-cert" not found
Mar 18 09:52:41.925866 master-0 kubenswrapper[3991]: I0318 09:52:41.925704 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr"
Mar 18 09:52:41.926983 master-0 kubenswrapper[3991]: E0318 09:52:41.925959 3991 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 18 09:52:41.926983 master-0 kubenswrapper[3991]: E0318 09:52:41.926080 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert podName:15f8941b-dba2-40ba-86d5-3318f5b635cc nodeName:}" failed. No retries permitted until 2026-03-18 09:52:43.926044259 +0000 UTC m=+47.884984194 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert") pod "cluster-version-operator-56d8475767-c2qzr" (UID: "15f8941b-dba2-40ba-86d5-3318f5b635cc") : secret "cluster-version-operator-serving-cert" not found
Mar 18 09:52:42.046631 master-0 kubenswrapper[3991]: I0318 09:52:42.044953 3991 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Mar 18 09:52:42.080444 master-0 kubenswrapper[3991]: I0318 09:52:42.079302 3991 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Mar 18 09:52:42.091615 master-0 kubenswrapper[3991]: I0318 09:52:42.091556 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15f8941b-dba2-40ba-86d5-3318f5b635cc-kube-api-access\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr"
Mar 18 09:52:42.681971 master-0 kubenswrapper[3991]: I0318 09:52:42.681322 3991 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 18 09:52:42.815345 master-0 kubenswrapper[3991]: I0318 09:52:42.815277 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-7bd846bfc4-8srnz"]
Mar 18 09:52:42.815628 master-0 kubenswrapper[3991]: I0318 09:52:42.815593 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-7bd846bfc4-8srnz" Mar 18 09:52:42.818761 master-0 kubenswrapper[3991]: I0318 09:52:42.818710 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 18 09:52:42.819139 master-0 kubenswrapper[3991]: I0318 09:52:42.819103 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 18 09:52:42.819471 master-0 kubenswrapper[3991]: I0318 09:52:42.819434 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 18 09:52:42.942690 master-0 kubenswrapper[3991]: I0318 09:52:42.942495 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9ccdc221-4ec5-487e-8ec4-85284ed628d8-host-etc-kube\") pod \"network-operator-7bd846bfc4-8srnz\" (UID: \"9ccdc221-4ec5-487e-8ec4-85284ed628d8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-8srnz" Mar 18 09:52:42.942690 master-0 kubenswrapper[3991]: I0318 09:52:42.942579 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ccdc221-4ec5-487e-8ec4-85284ed628d8-metrics-tls\") pod \"network-operator-7bd846bfc4-8srnz\" (UID: \"9ccdc221-4ec5-487e-8ec4-85284ed628d8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-8srnz" Mar 18 09:52:42.942690 master-0 kubenswrapper[3991]: I0318 09:52:42.942614 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghd2r\" (UniqueName: \"kubernetes.io/projected/9ccdc221-4ec5-487e-8ec4-85284ed628d8-kube-api-access-ghd2r\") pod \"network-operator-7bd846bfc4-8srnz\" (UID: \"9ccdc221-4ec5-487e-8ec4-85284ed628d8\") " 
pod="openshift-network-operator/network-operator-7bd846bfc4-8srnz" Mar 18 09:52:42.972297 master-0 kubenswrapper[3991]: I0318 09:52:42.972228 3991 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 18 09:52:43.046922 master-0 kubenswrapper[3991]: I0318 09:52:43.043241 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9ccdc221-4ec5-487e-8ec4-85284ed628d8-host-etc-kube\") pod \"network-operator-7bd846bfc4-8srnz\" (UID: \"9ccdc221-4ec5-487e-8ec4-85284ed628d8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-8srnz" Mar 18 09:52:43.046922 master-0 kubenswrapper[3991]: I0318 09:52:43.043343 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ccdc221-4ec5-487e-8ec4-85284ed628d8-metrics-tls\") pod \"network-operator-7bd846bfc4-8srnz\" (UID: \"9ccdc221-4ec5-487e-8ec4-85284ed628d8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-8srnz" Mar 18 09:52:43.046922 master-0 kubenswrapper[3991]: I0318 09:52:43.043377 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghd2r\" (UniqueName: \"kubernetes.io/projected/9ccdc221-4ec5-487e-8ec4-85284ed628d8-kube-api-access-ghd2r\") pod \"network-operator-7bd846bfc4-8srnz\" (UID: \"9ccdc221-4ec5-487e-8ec4-85284ed628d8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-8srnz" Mar 18 09:52:43.046922 master-0 kubenswrapper[3991]: I0318 09:52:43.043924 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9ccdc221-4ec5-487e-8ec4-85284ed628d8-host-etc-kube\") pod \"network-operator-7bd846bfc4-8srnz\" (UID: \"9ccdc221-4ec5-487e-8ec4-85284ed628d8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-8srnz" Mar 18 09:52:43.047733 master-0 
kubenswrapper[3991]: I0318 09:52:43.047673 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ccdc221-4ec5-487e-8ec4-85284ed628d8-metrics-tls\") pod \"network-operator-7bd846bfc4-8srnz\" (UID: \"9ccdc221-4ec5-487e-8ec4-85284ed628d8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-8srnz" Mar 18 09:52:43.186954 master-0 kubenswrapper[3991]: I0318 09:52:43.186807 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghd2r\" (UniqueName: \"kubernetes.io/projected/9ccdc221-4ec5-487e-8ec4-85284ed628d8-kube-api-access-ghd2r\") pod \"network-operator-7bd846bfc4-8srnz\" (UID: \"9ccdc221-4ec5-487e-8ec4-85284ed628d8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-8srnz" Mar 18 09:52:43.332804 master-0 kubenswrapper[3991]: I0318 09:52:43.332691 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/assisted-installer-controller-ttq68"] Mar 18 09:52:43.333204 master-0 kubenswrapper[3991]: I0318 09:52:43.333084 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-ttq68" Mar 18 09:52:43.335498 master-0 kubenswrapper[3991]: I0318 09:52:43.335414 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config" Mar 18 09:52:43.335575 master-0 kubenswrapper[3991]: I0318 09:52:43.335516 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt" Mar 18 09:52:43.336220 master-0 kubenswrapper[3991]: I0318 09:52:43.335699 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt" Mar 18 09:52:43.337763 master-0 kubenswrapper[3991]: I0318 09:52:43.337720 3991 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret" Mar 18 09:52:43.442973 master-0 kubenswrapper[3991]: I0318 09:52:43.442888 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bd846bfc4-8srnz" Mar 18 09:52:43.462874 master-0 kubenswrapper[3991]: W0318 09:52:43.462732 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ccdc221_4ec5_487e_8ec4_85284ed628d8.slice/crio-6a356215383e4477cfa420d0c3e3c8a05dd9f0afe4bf19cf96af5611814ab90a WatchSource:0}: Error finding container 6a356215383e4477cfa420d0c3e3c8a05dd9f0afe4bf19cf96af5611814ab90a: Status 404 returned error can't find the container with id 6a356215383e4477cfa420d0c3e3c8a05dd9f0afe4bf19cf96af5611814ab90a Mar 18 09:52:43.480200 master-0 kubenswrapper[3991]: I0318 09:52:43.480128 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/2cda3479-c3ed-4d79-bbd3-888e64b328f7-host-ca-bundle\") pod \"assisted-installer-controller-ttq68\" (UID: 
\"2cda3479-c3ed-4d79-bbd3-888e64b328f7\") " pod="assisted-installer/assisted-installer-controller-ttq68" Mar 18 09:52:43.480406 master-0 kubenswrapper[3991]: I0318 09:52:43.480228 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/2cda3479-c3ed-4d79-bbd3-888e64b328f7-sno-bootstrap-files\") pod \"assisted-installer-controller-ttq68\" (UID: \"2cda3479-c3ed-4d79-bbd3-888e64b328f7\") " pod="assisted-installer/assisted-installer-controller-ttq68" Mar 18 09:52:43.480520 master-0 kubenswrapper[3991]: I0318 09:52:43.480441 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2cda3479-c3ed-4d79-bbd3-888e64b328f7-host-var-run-resolv-conf\") pod \"assisted-installer-controller-ttq68\" (UID: \"2cda3479-c3ed-4d79-bbd3-888e64b328f7\") " pod="assisted-installer/assisted-installer-controller-ttq68" Mar 18 09:52:43.480520 master-0 kubenswrapper[3991]: I0318 09:52:43.480502 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2cda3479-c3ed-4d79-bbd3-888e64b328f7-host-resolv-conf\") pod \"assisted-installer-controller-ttq68\" (UID: \"2cda3479-c3ed-4d79-bbd3-888e64b328f7\") " pod="assisted-installer/assisted-installer-controller-ttq68" Mar 18 09:52:43.480724 master-0 kubenswrapper[3991]: I0318 09:52:43.480550 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcpf2\" (UniqueName: \"kubernetes.io/projected/2cda3479-c3ed-4d79-bbd3-888e64b328f7-kube-api-access-wcpf2\") pod \"assisted-installer-controller-ttq68\" (UID: \"2cda3479-c3ed-4d79-bbd3-888e64b328f7\") " pod="assisted-installer/assisted-installer-controller-ttq68" Mar 18 09:52:43.581292 master-0 kubenswrapper[3991]: I0318 09:52:43.581232 
3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/2cda3479-c3ed-4d79-bbd3-888e64b328f7-host-ca-bundle\") pod \"assisted-installer-controller-ttq68\" (UID: \"2cda3479-c3ed-4d79-bbd3-888e64b328f7\") " pod="assisted-installer/assisted-installer-controller-ttq68" Mar 18 09:52:43.581491 master-0 kubenswrapper[3991]: I0318 09:52:43.581309 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/2cda3479-c3ed-4d79-bbd3-888e64b328f7-sno-bootstrap-files\") pod \"assisted-installer-controller-ttq68\" (UID: \"2cda3479-c3ed-4d79-bbd3-888e64b328f7\") " pod="assisted-installer/assisted-installer-controller-ttq68" Mar 18 09:52:43.581491 master-0 kubenswrapper[3991]: I0318 09:52:43.581346 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2cda3479-c3ed-4d79-bbd3-888e64b328f7-host-var-run-resolv-conf\") pod \"assisted-installer-controller-ttq68\" (UID: \"2cda3479-c3ed-4d79-bbd3-888e64b328f7\") " pod="assisted-installer/assisted-installer-controller-ttq68" Mar 18 09:52:43.581491 master-0 kubenswrapper[3991]: I0318 09:52:43.581383 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2cda3479-c3ed-4d79-bbd3-888e64b328f7-host-resolv-conf\") pod \"assisted-installer-controller-ttq68\" (UID: \"2cda3479-c3ed-4d79-bbd3-888e64b328f7\") " pod="assisted-installer/assisted-installer-controller-ttq68" Mar 18 09:52:43.581491 master-0 kubenswrapper[3991]: I0318 09:52:43.581418 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcpf2\" (UniqueName: \"kubernetes.io/projected/2cda3479-c3ed-4d79-bbd3-888e64b328f7-kube-api-access-wcpf2\") pod \"assisted-installer-controller-ttq68\" (UID: 
\"2cda3479-c3ed-4d79-bbd3-888e64b328f7\") " pod="assisted-installer/assisted-installer-controller-ttq68" Mar 18 09:52:43.581982 master-0 kubenswrapper[3991]: I0318 09:52:43.581943 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/2cda3479-c3ed-4d79-bbd3-888e64b328f7-sno-bootstrap-files\") pod \"assisted-installer-controller-ttq68\" (UID: \"2cda3479-c3ed-4d79-bbd3-888e64b328f7\") " pod="assisted-installer/assisted-installer-controller-ttq68" Mar 18 09:52:43.582181 master-0 kubenswrapper[3991]: I0318 09:52:43.582116 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/2cda3479-c3ed-4d79-bbd3-888e64b328f7-host-ca-bundle\") pod \"assisted-installer-controller-ttq68\" (UID: \"2cda3479-c3ed-4d79-bbd3-888e64b328f7\") " pod="assisted-installer/assisted-installer-controller-ttq68" Mar 18 09:52:43.582239 master-0 kubenswrapper[3991]: I0318 09:52:43.582202 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2cda3479-c3ed-4d79-bbd3-888e64b328f7-host-resolv-conf\") pod \"assisted-installer-controller-ttq68\" (UID: \"2cda3479-c3ed-4d79-bbd3-888e64b328f7\") " pod="assisted-installer/assisted-installer-controller-ttq68" Mar 18 09:52:43.582297 master-0 kubenswrapper[3991]: I0318 09:52:43.582268 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2cda3479-c3ed-4d79-bbd3-888e64b328f7-host-var-run-resolv-conf\") pod \"assisted-installer-controller-ttq68\" (UID: \"2cda3479-c3ed-4d79-bbd3-888e64b328f7\") " pod="assisted-installer/assisted-installer-controller-ttq68" Mar 18 09:52:43.599049 master-0 kubenswrapper[3991]: I0318 09:52:43.598955 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcpf2\" (UniqueName: 
\"kubernetes.io/projected/2cda3479-c3ed-4d79-bbd3-888e64b328f7-kube-api-access-wcpf2\") pod \"assisted-installer-controller-ttq68\" (UID: \"2cda3479-c3ed-4d79-bbd3-888e64b328f7\") " pod="assisted-installer/assisted-installer-controller-ttq68" Mar 18 09:52:43.655399 master-0 kubenswrapper[3991]: I0318 09:52:43.655296 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-ttq68" Mar 18 09:52:43.672392 master-0 kubenswrapper[3991]: W0318 09:52:43.672313 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cda3479_c3ed_4d79_bbd3_888e64b328f7.slice/crio-c985fd1643f6c6fd8181176e1149d324515647d1a390abe33081b9ded6959a0f WatchSource:0}: Error finding container c985fd1643f6c6fd8181176e1149d324515647d1a390abe33081b9ded6959a0f: Status 404 returned error can't find the container with id c985fd1643f6c6fd8181176e1149d324515647d1a390abe33081b9ded6959a0f Mar 18 09:52:43.827475 master-0 kubenswrapper[3991]: I0318 09:52:43.827379 3991 csr.go:261] certificate signing request csr-2qmdb is approved, waiting to be issued Mar 18 09:52:43.839682 master-0 kubenswrapper[3991]: I0318 09:52:43.839638 3991 csr.go:257] certificate signing request csr-2qmdb is issued Mar 18 09:52:43.984854 master-0 kubenswrapper[3991]: I0318 09:52:43.984625 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr" Mar 18 09:52:43.985888 master-0 kubenswrapper[3991]: E0318 09:52:43.984859 3991 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 
09:52:43.985888 master-0 kubenswrapper[3991]: E0318 09:52:43.984955 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert podName:15f8941b-dba2-40ba-86d5-3318f5b635cc nodeName:}" failed. No retries permitted until 2026-03-18 09:52:47.984931972 +0000 UTC m=+51.943871867 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert") pod "cluster-version-operator-56d8475767-c2qzr" (UID: "15f8941b-dba2-40ba-86d5-3318f5b635cc") : secret "cluster-version-operator-serving-cert" not found Mar 18 09:52:44.372669 master-0 kubenswrapper[3991]: I0318 09:52:44.372579 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-ttq68" event={"ID":"2cda3479-c3ed-4d79-bbd3-888e64b328f7","Type":"ContainerStarted","Data":"c985fd1643f6c6fd8181176e1149d324515647d1a390abe33081b9ded6959a0f"} Mar 18 09:52:44.373958 master-0 kubenswrapper[3991]: I0318 09:52:44.373902 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-8srnz" event={"ID":"9ccdc221-4ec5-487e-8ec4-85284ed628d8","Type":"ContainerStarted","Data":"6a356215383e4477cfa420d0c3e3c8a05dd9f0afe4bf19cf96af5611814ab90a"} Mar 18 09:52:44.842229 master-0 kubenswrapper[3991]: I0318 09:52:44.842131 3991 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-19 09:43:17 +0000 UTC, rotation deadline is 2026-03-19 06:52:04.5898836 +0000 UTC Mar 18 09:52:44.842229 master-0 kubenswrapper[3991]: I0318 09:52:44.842206 3991 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h59m19.74768023s for next certificate rotation Mar 18 09:52:45.844438 master-0 kubenswrapper[3991]: I0318 09:52:45.843436 3991 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-19 
09:43:17 +0000 UTC, rotation deadline is 2026-03-19 04:41:28.131457199 +0000 UTC Mar 18 09:52:45.844438 master-0 kubenswrapper[3991]: I0318 09:52:45.843517 3991 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 18h48m42.28794605s for next certificate rotation Mar 18 09:52:46.206378 master-0 kubenswrapper[3991]: I0318 09:52:46.206255 3991 scope.go:117] "RemoveContainer" containerID="d2de72527ed6923a5ffc8e75d557ea5b2e3fbc7f0f250aeb34b97b6d6a4b673b" Mar 18 09:52:46.206606 master-0 kubenswrapper[3991]: E0318 09:52:46.206449 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19" Mar 18 09:52:46.206606 master-0 kubenswrapper[3991]: I0318 09:52:46.206533 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"] Mar 18 09:52:46.378440 master-0 kubenswrapper[3991]: I0318 09:52:46.378362 3991 scope.go:117] "RemoveContainer" containerID="d2de72527ed6923a5ffc8e75d557ea5b2e3fbc7f0f250aeb34b97b6d6a4b673b" Mar 18 09:52:46.378687 master-0 kubenswrapper[3991]: E0318 09:52:46.378635 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19" Mar 18 09:52:48.011567 master-0 kubenswrapper[3991]: I0318 09:52:48.011521 3991 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr" Mar 18 09:52:48.012331 master-0 kubenswrapper[3991]: E0318 09:52:48.011645 3991 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 09:52:48.012331 master-0 kubenswrapper[3991]: E0318 09:52:48.011713 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert podName:15f8941b-dba2-40ba-86d5-3318f5b635cc nodeName:}" failed. No retries permitted until 2026-03-18 09:52:56.011690845 +0000 UTC m=+59.970630740 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert") pod "cluster-version-operator-56d8475767-c2qzr" (UID: "15f8941b-dba2-40ba-86d5-3318f5b635cc") : secret "cluster-version-operator-serving-cert" not found Mar 18 09:52:51.389683 master-0 kubenswrapper[3991]: I0318 09:52:51.389601 3991 generic.go:334] "Generic (PLEG): container finished" podID="2cda3479-c3ed-4d79-bbd3-888e64b328f7" containerID="d957302f7adb981277fbf539c8fb8ba8b510cdf036ae3b42bb11275306e467ec" exitCode=0 Mar 18 09:52:51.389683 master-0 kubenswrapper[3991]: I0318 09:52:51.389653 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-ttq68" event={"ID":"2cda3479-c3ed-4d79-bbd3-888e64b328f7","Type":"ContainerDied","Data":"d957302f7adb981277fbf539c8fb8ba8b510cdf036ae3b42bb11275306e467ec"} Mar 18 09:52:51.390615 master-0 kubenswrapper[3991]: I0318 09:52:51.390567 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/network-operator-7bd846bfc4-8srnz" event={"ID":"9ccdc221-4ec5-487e-8ec4-85284ed628d8","Type":"ContainerStarted","Data":"809e75633cdef66e6f08501f6041dd63595d2c3bfee4b8663f566a1c8682596e"} Mar 18 09:52:51.417751 master-0 kubenswrapper[3991]: I0318 09:52:51.417633 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/network-operator-7bd846bfc4-8srnz" podStartSLOduration=3.447915858 podStartE2EDuration="10.417605468s" podCreationTimestamp="2026-03-18 09:52:41 +0000 UTC" firstStartedPulling="2026-03-18 09:52:43.466061292 +0000 UTC m=+47.425001217" lastFinishedPulling="2026-03-18 09:52:50.435750892 +0000 UTC m=+54.394690827" observedRunningTime="2026-03-18 09:52:51.416159549 +0000 UTC m=+55.375099454" watchObservedRunningTime="2026-03-18 09:52:51.417605468 +0000 UTC m=+55.376545403" Mar 18 09:52:52.428519 master-0 kubenswrapper[3991]: I0318 09:52:52.428433 3991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-ttq68" Mar 18 09:52:52.547490 master-0 kubenswrapper[3991]: I0318 09:52:52.547420 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcpf2\" (UniqueName: \"kubernetes.io/projected/2cda3479-c3ed-4d79-bbd3-888e64b328f7-kube-api-access-wcpf2\") pod \"2cda3479-c3ed-4d79-bbd3-888e64b328f7\" (UID: \"2cda3479-c3ed-4d79-bbd3-888e64b328f7\") " Mar 18 09:52:52.547490 master-0 kubenswrapper[3991]: I0318 09:52:52.547498 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/2cda3479-c3ed-4d79-bbd3-888e64b328f7-sno-bootstrap-files\") pod \"2cda3479-c3ed-4d79-bbd3-888e64b328f7\" (UID: \"2cda3479-c3ed-4d79-bbd3-888e64b328f7\") " Mar 18 09:52:52.547811 master-0 kubenswrapper[3991]: I0318 09:52:52.547531 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2cda3479-c3ed-4d79-bbd3-888e64b328f7-host-resolv-conf\") pod \"2cda3479-c3ed-4d79-bbd3-888e64b328f7\" (UID: \"2cda3479-c3ed-4d79-bbd3-888e64b328f7\") " Mar 18 09:52:52.547811 master-0 kubenswrapper[3991]: I0318 09:52:52.547561 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/2cda3479-c3ed-4d79-bbd3-888e64b328f7-host-ca-bundle\") pod \"2cda3479-c3ed-4d79-bbd3-888e64b328f7\" (UID: \"2cda3479-c3ed-4d79-bbd3-888e64b328f7\") " Mar 18 09:52:52.547811 master-0 kubenswrapper[3991]: I0318 09:52:52.547591 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2cda3479-c3ed-4d79-bbd3-888e64b328f7-host-var-run-resolv-conf\") pod \"2cda3479-c3ed-4d79-bbd3-888e64b328f7\" (UID: \"2cda3479-c3ed-4d79-bbd3-888e64b328f7\") " Mar 18 09:52:52.547811 master-0 
kubenswrapper[3991]: I0318 09:52:52.547658 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cda3479-c3ed-4d79-bbd3-888e64b328f7-sno-bootstrap-files" (OuterVolumeSpecName: "sno-bootstrap-files") pod "2cda3479-c3ed-4d79-bbd3-888e64b328f7" (UID: "2cda3479-c3ed-4d79-bbd3-888e64b328f7"). InnerVolumeSpecName "sno-bootstrap-files". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:52:52.547811 master-0 kubenswrapper[3991]: I0318 09:52:52.547729 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cda3479-c3ed-4d79-bbd3-888e64b328f7-host-resolv-conf" (OuterVolumeSpecName: "host-resolv-conf") pod "2cda3479-c3ed-4d79-bbd3-888e64b328f7" (UID: "2cda3479-c3ed-4d79-bbd3-888e64b328f7"). InnerVolumeSpecName "host-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:52:52.547811 master-0 kubenswrapper[3991]: I0318 09:52:52.547743 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cda3479-c3ed-4d79-bbd3-888e64b328f7-host-ca-bundle" (OuterVolumeSpecName: "host-ca-bundle") pod "2cda3479-c3ed-4d79-bbd3-888e64b328f7" (UID: "2cda3479-c3ed-4d79-bbd3-888e64b328f7"). InnerVolumeSpecName "host-ca-bundle". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:52:52.547811 master-0 kubenswrapper[3991]: I0318 09:52:52.547788 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cda3479-c3ed-4d79-bbd3-888e64b328f7-host-var-run-resolv-conf" (OuterVolumeSpecName: "host-var-run-resolv-conf") pod "2cda3479-c3ed-4d79-bbd3-888e64b328f7" (UID: "2cda3479-c3ed-4d79-bbd3-888e64b328f7"). InnerVolumeSpecName "host-var-run-resolv-conf". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:52:52.548247 master-0 kubenswrapper[3991]: I0318 09:52:52.547914 3991 reconciler_common.go:293] "Volume detached for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/2cda3479-c3ed-4d79-bbd3-888e64b328f7-sno-bootstrap-files\") on node \"master-0\" DevicePath \"\"" Mar 18 09:52:52.548247 master-0 kubenswrapper[3991]: I0318 09:52:52.547936 3991 reconciler_common.go:293] "Volume detached for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2cda3479-c3ed-4d79-bbd3-888e64b328f7-host-resolv-conf\") on node \"master-0\" DevicePath \"\"" Mar 18 09:52:52.548247 master-0 kubenswrapper[3991]: I0318 09:52:52.547957 3991 reconciler_common.go:293] "Volume detached for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/2cda3479-c3ed-4d79-bbd3-888e64b328f7-host-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 09:52:52.548247 master-0 kubenswrapper[3991]: I0318 09:52:52.547975 3991 reconciler_common.go:293] "Volume detached for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2cda3479-c3ed-4d79-bbd3-888e64b328f7-host-var-run-resolv-conf\") on node \"master-0\" DevicePath \"\"" Mar 18 09:52:52.552939 master-0 kubenswrapper[3991]: I0318 09:52:52.552858 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cda3479-c3ed-4d79-bbd3-888e64b328f7-kube-api-access-wcpf2" (OuterVolumeSpecName: "kube-api-access-wcpf2") pod "2cda3479-c3ed-4d79-bbd3-888e64b328f7" (UID: "2cda3479-c3ed-4d79-bbd3-888e64b328f7"). InnerVolumeSpecName "kube-api-access-wcpf2". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 09:52:52.649111 master-0 kubenswrapper[3991]: I0318 09:52:52.648957 3991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcpf2\" (UniqueName: \"kubernetes.io/projected/2cda3479-c3ed-4d79-bbd3-888e64b328f7-kube-api-access-wcpf2\") on node \"master-0\" DevicePath \"\""
Mar 18 09:52:53.397845 master-0 kubenswrapper[3991]: I0318 09:52:53.397746 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-ttq68" event={"ID":"2cda3479-c3ed-4d79-bbd3-888e64b328f7","Type":"ContainerDied","Data":"c985fd1643f6c6fd8181176e1149d324515647d1a390abe33081b9ded6959a0f"}
Mar 18 09:52:53.397845 master-0 kubenswrapper[3991]: I0318 09:52:53.397795 3991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-ttq68"
Mar 18 09:52:53.398416 master-0 kubenswrapper[3991]: I0318 09:52:53.397860 3991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c985fd1643f6c6fd8181176e1149d324515647d1a390abe33081b9ded6959a0f"
Mar 18 09:52:53.623574 master-0 kubenswrapper[3991]: I0318 09:52:53.623465 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/mtu-prober-zzk2p"]
Mar 18 09:52:53.624550 master-0 kubenswrapper[3991]: E0318 09:52:53.623599 3991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cda3479-c3ed-4d79-bbd3-888e64b328f7" containerName="assisted-installer-controller"
Mar 18 09:52:53.624550 master-0 kubenswrapper[3991]: I0318 09:52:53.623626 3991 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cda3479-c3ed-4d79-bbd3-888e64b328f7" containerName="assisted-installer-controller"
Mar 18 09:52:53.624550 master-0 kubenswrapper[3991]: I0318 09:52:53.623675 3991 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cda3479-c3ed-4d79-bbd3-888e64b328f7" containerName="assisted-installer-controller"
Mar 18 09:52:53.624550 master-0 kubenswrapper[3991]: I0318 09:52:53.623997 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-zzk2p"
Mar 18 09:52:53.756740 master-0 kubenswrapper[3991]: I0318 09:52:53.756692 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck2ns\" (UniqueName: \"kubernetes.io/projected/3796179a-f6c1-4f97-a2e1-d32106a5d8e9-kube-api-access-ck2ns\") pod \"mtu-prober-zzk2p\" (UID: \"3796179a-f6c1-4f97-a2e1-d32106a5d8e9\") " pod="openshift-network-operator/mtu-prober-zzk2p"
Mar 18 09:52:53.857293 master-0 kubenswrapper[3991]: I0318 09:52:53.857188 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck2ns\" (UniqueName: \"kubernetes.io/projected/3796179a-f6c1-4f97-a2e1-d32106a5d8e9-kube-api-access-ck2ns\") pod \"mtu-prober-zzk2p\" (UID: \"3796179a-f6c1-4f97-a2e1-d32106a5d8e9\") " pod="openshift-network-operator/mtu-prober-zzk2p"
Mar 18 09:52:53.887075 master-0 kubenswrapper[3991]: I0318 09:52:53.886923 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck2ns\" (UniqueName: \"kubernetes.io/projected/3796179a-f6c1-4f97-a2e1-d32106a5d8e9-kube-api-access-ck2ns\") pod \"mtu-prober-zzk2p\" (UID: \"3796179a-f6c1-4f97-a2e1-d32106a5d8e9\") " pod="openshift-network-operator/mtu-prober-zzk2p"
Mar 18 09:52:53.940302 master-0 kubenswrapper[3991]: I0318 09:52:53.940202 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-zzk2p"
Mar 18 09:52:53.953960 master-0 kubenswrapper[3991]: W0318 09:52:53.953900 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3796179a_f6c1_4f97_a2e1_d32106a5d8e9.slice/crio-4da9f8c70f1716c5e032f09a6a5017ac3987811ec91a138b7a837bbb86e4f381 WatchSource:0}: Error finding container 4da9f8c70f1716c5e032f09a6a5017ac3987811ec91a138b7a837bbb86e4f381: Status 404 returned error can't find the container with id 4da9f8c70f1716c5e032f09a6a5017ac3987811ec91a138b7a837bbb86e4f381
Mar 18 09:52:54.402939 master-0 kubenswrapper[3991]: I0318 09:52:54.402761 3991 generic.go:334] "Generic (PLEG): container finished" podID="3796179a-f6c1-4f97-a2e1-d32106a5d8e9" containerID="bd008f41fdcd1da5525afb4e170a05e1a1f3c337467181cdcfc21b203b5549da" exitCode=0
Mar 18 09:52:54.402939 master-0 kubenswrapper[3991]: I0318 09:52:54.402852 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-zzk2p" event={"ID":"3796179a-f6c1-4f97-a2e1-d32106a5d8e9","Type":"ContainerDied","Data":"bd008f41fdcd1da5525afb4e170a05e1a1f3c337467181cdcfc21b203b5549da"}
Mar 18 09:52:54.402939 master-0 kubenswrapper[3991]: I0318 09:52:54.402913 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-zzk2p" event={"ID":"3796179a-f6c1-4f97-a2e1-d32106a5d8e9","Type":"ContainerStarted","Data":"4da9f8c70f1716c5e032f09a6a5017ac3987811ec91a138b7a837bbb86e4f381"}
Mar 18 09:52:55.431430 master-0 kubenswrapper[3991]: I0318 09:52:55.431351 3991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-zzk2p"
Mar 18 09:52:55.568759 master-0 kubenswrapper[3991]: I0318 09:52:55.568683 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ck2ns\" (UniqueName: \"kubernetes.io/projected/3796179a-f6c1-4f97-a2e1-d32106a5d8e9-kube-api-access-ck2ns\") pod \"3796179a-f6c1-4f97-a2e1-d32106a5d8e9\" (UID: \"3796179a-f6c1-4f97-a2e1-d32106a5d8e9\") "
Mar 18 09:52:55.574858 master-0 kubenswrapper[3991]: I0318 09:52:55.574756 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3796179a-f6c1-4f97-a2e1-d32106a5d8e9-kube-api-access-ck2ns" (OuterVolumeSpecName: "kube-api-access-ck2ns") pod "3796179a-f6c1-4f97-a2e1-d32106a5d8e9" (UID: "3796179a-f6c1-4f97-a2e1-d32106a5d8e9"). InnerVolumeSpecName "kube-api-access-ck2ns". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 09:52:55.670081 master-0 kubenswrapper[3991]: I0318 09:52:55.669915 3991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ck2ns\" (UniqueName: \"kubernetes.io/projected/3796179a-f6c1-4f97-a2e1-d32106a5d8e9-kube-api-access-ck2ns\") on node \"master-0\" DevicePath \"\""
Mar 18 09:52:56.074269 master-0 kubenswrapper[3991]: I0318 09:52:56.074179 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr"
Mar 18 09:52:56.074492 master-0 kubenswrapper[3991]: E0318 09:52:56.074358 3991 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 18 09:52:56.074492 master-0 kubenswrapper[3991]: E0318 09:52:56.074483 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert podName:15f8941b-dba2-40ba-86d5-3318f5b635cc nodeName:}" failed. No retries permitted until 2026-03-18 09:53:12.074455144 +0000 UTC m=+76.033395079 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert") pod "cluster-version-operator-56d8475767-c2qzr" (UID: "15f8941b-dba2-40ba-86d5-3318f5b635cc") : secret "cluster-version-operator-serving-cert" not found
Mar 18 09:52:56.410238 master-0 kubenswrapper[3991]: I0318 09:52:56.410005 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-zzk2p" event={"ID":"3796179a-f6c1-4f97-a2e1-d32106a5d8e9","Type":"ContainerDied","Data":"4da9f8c70f1716c5e032f09a6a5017ac3987811ec91a138b7a837bbb86e4f381"}
Mar 18 09:52:56.410238 master-0 kubenswrapper[3991]: I0318 09:52:56.410090 3991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-zzk2p"
Mar 18 09:52:56.410238 master-0 kubenswrapper[3991]: I0318 09:52:56.410111 3991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4da9f8c70f1716c5e032f09a6a5017ac3987811ec91a138b7a837bbb86e4f381"
Mar 18 09:52:58.150088 master-0 kubenswrapper[3991]: I0318 09:52:58.149665 3991 scope.go:117] "RemoveContainer" containerID="d2de72527ed6923a5ffc8e75d557ea5b2e3fbc7f0f250aeb34b97b6d6a4b673b"
Mar 18 09:52:58.419182 master-0 kubenswrapper[3991]: I0318 09:52:58.419063 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log"
Mar 18 09:52:58.419803 master-0 kubenswrapper[3991]: I0318 09:52:58.419742 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"b4356aff744ddd84b751a19b6b1c926a7d4c3a2ecf0278ac7c42e1a78ef7db64"}
Mar 18 09:52:58.634368 master-0 kubenswrapper[3991]: I0318 09:52:58.634269 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podStartSLOduration=12.634239522 podStartE2EDuration="12.634239522s" podCreationTimestamp="2026-03-18 09:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:52:58.438286072 +0000 UTC m=+62.397226007" watchObservedRunningTime="2026-03-18 09:52:58.634239522 +0000 UTC m=+62.593179457"
Mar 18 09:52:58.634715 master-0 kubenswrapper[3991]: I0318 09:52:58.634445 3991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-network-operator/mtu-prober-zzk2p"]
Mar 18 09:52:58.637975 master-0 kubenswrapper[3991]: I0318 09:52:58.637750 3991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-network-operator/mtu-prober-zzk2p"]
Mar 18 09:52:59.154811 master-0 kubenswrapper[3991]: I0318 09:52:59.154679 3991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3796179a-f6c1-4f97-a2e1-d32106a5d8e9" path="/var/lib/kubelet/pods/3796179a-f6c1-4f97-a2e1-d32106a5d8e9/volumes"
Mar 18 09:53:03.577917 master-0 kubenswrapper[3991]: I0318 09:53:03.577844 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-xgdvw"]
Mar 18 09:53:03.578916 master-0 kubenswrapper[3991]: E0318 09:53:03.577951 3991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3796179a-f6c1-4f97-a2e1-d32106a5d8e9" containerName="prober"
Mar 18 09:53:03.578916 master-0 kubenswrapper[3991]: I0318 09:53:03.577966 3991 state_mem.go:107] "Deleted CPUSet assignment" podUID="3796179a-f6c1-4f97-a2e1-d32106a5d8e9" containerName="prober"
Mar 18 09:53:03.578916 master-0 kubenswrapper[3991]: I0318 09:53:03.577997 3991 memory_manager.go:354] "RemoveStaleState removing state" podUID="3796179a-f6c1-4f97-a2e1-d32106a5d8e9" containerName="prober"
Mar 18 09:53:03.578916 master-0 kubenswrapper[3991]: I0318 09:53:03.578212 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.580356 master-0 kubenswrapper[3991]: I0318 09:53:03.580309 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 18 09:53:03.580573 master-0 kubenswrapper[3991]: I0318 09:53:03.580533 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 18 09:53:03.580961 master-0 kubenswrapper[3991]: I0318 09:53:03.580923 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 18 09:53:03.581292 master-0 kubenswrapper[3991]: I0318 09:53:03.581224 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 18 09:53:03.733059 master-0 kubenswrapper[3991]: I0318 09:53:03.732987 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-cni-binary-copy\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.733059 master-0 kubenswrapper[3991]: I0318 09:53:03.733042 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-run-netns\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.733059 master-0 kubenswrapper[3991]: I0318 09:53:03.733066 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcj8f\" (UniqueName: \"kubernetes.io/projected/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-kube-api-access-hcj8f\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.733396 master-0 kubenswrapper[3991]: I0318 09:53:03.733090 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-run-k8s-cni-cncf-io\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.733396 master-0 kubenswrapper[3991]: I0318 09:53:03.733115 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-conf-dir\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.733396 master-0 kubenswrapper[3991]: I0318 09:53:03.733205 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-cnibin\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.733396 master-0 kubenswrapper[3991]: I0318 09:53:03.733269 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-var-lib-cni-bin\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.733396 master-0 kubenswrapper[3991]: I0318 09:53:03.733348 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-system-cni-dir\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.733627 master-0 kubenswrapper[3991]: I0318 09:53:03.733446 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-var-lib-cni-multus\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.733627 master-0 kubenswrapper[3991]: I0318 09:53:03.733549 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-socket-dir-parent\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.733751 master-0 kubenswrapper[3991]: I0318 09:53:03.733628 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-var-lib-kubelet\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.733751 master-0 kubenswrapper[3991]: I0318 09:53:03.733691 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-etc-kubernetes\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.733906 master-0 kubenswrapper[3991]: I0318 09:53:03.733795 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-daemon-config\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.733906 master-0 kubenswrapper[3991]: I0318 09:53:03.733887 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-run-multus-certs\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.734019 master-0 kubenswrapper[3991]: I0318 09:53:03.733964 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-cni-dir\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.734083 master-0 kubenswrapper[3991]: I0318 09:53:03.734012 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-os-release\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.734083 master-0 kubenswrapper[3991]: I0318 09:53:03.734057 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-hostroot\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.780066 master-0 kubenswrapper[3991]: I0318 09:53:03.779996 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-dg6dw"]
Mar 18 09:53:03.780623 master-0 kubenswrapper[3991]: I0318 09:53:03.780583 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-dg6dw"
Mar 18 09:53:03.784207 master-0 kubenswrapper[3991]: I0318 09:53:03.783975 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config"
Mar 18 09:53:03.786467 master-0 kubenswrapper[3991]: I0318 09:53:03.786410 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 18 09:53:03.835731 master-0 kubenswrapper[3991]: I0318 09:53:03.834398 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-cni-binary-copy\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.835731 master-0 kubenswrapper[3991]: I0318 09:53:03.834441 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-run-netns\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.835731 master-0 kubenswrapper[3991]: I0318 09:53:03.834469 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcj8f\" (UniqueName: \"kubernetes.io/projected/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-kube-api-access-hcj8f\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.835731 master-0 kubenswrapper[3991]: I0318 09:53:03.834493 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-run-k8s-cni-cncf-io\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.835731 master-0 kubenswrapper[3991]: I0318 09:53:03.834749 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-run-k8s-cni-cncf-io\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.835731 master-0 kubenswrapper[3991]: I0318 09:53:03.834743 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-conf-dir\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.835731 master-0 kubenswrapper[3991]: I0318 09:53:03.834867 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-conf-dir\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.835731 master-0 kubenswrapper[3991]: I0318 09:53:03.834906 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-run-netns\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.835731 master-0 kubenswrapper[3991]: I0318 09:53:03.834930 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-cnibin\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.835731 master-0 kubenswrapper[3991]: I0318 09:53:03.834952 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-var-lib-cni-bin\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.835731 master-0 kubenswrapper[3991]: I0318 09:53:03.834976 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-var-lib-cni-multus\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.835731 master-0 kubenswrapper[3991]: I0318 09:53:03.834997 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-system-cni-dir\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.835731 master-0 kubenswrapper[3991]: I0318 09:53:03.835019 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-socket-dir-parent\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.835731 master-0 kubenswrapper[3991]: I0318 09:53:03.835025 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-var-lib-cni-bin\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.835731 master-0 kubenswrapper[3991]: I0318 09:53:03.834994 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-cnibin\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.835731 master-0 kubenswrapper[3991]: I0318 09:53:03.835040 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-var-lib-kubelet\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.835731 master-0 kubenswrapper[3991]: I0318 09:53:03.835110 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-etc-kubernetes\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.835731 master-0 kubenswrapper[3991]: I0318 09:53:03.835115 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-socket-dir-parent\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.836952 master-0 kubenswrapper[3991]: I0318 09:53:03.835077 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-system-cni-dir\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.836952 master-0 kubenswrapper[3991]: I0318 09:53:03.835073 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-var-lib-kubelet\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.836952 master-0 kubenswrapper[3991]: I0318 09:53:03.835148 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-var-lib-cni-multus\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.836952 master-0 kubenswrapper[3991]: I0318 09:53:03.835165 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-daemon-config\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.836952 master-0 kubenswrapper[3991]: I0318 09:53:03.835201 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-etc-kubernetes\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.836952 master-0 kubenswrapper[3991]: I0318 09:53:03.835303 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-run-multus-certs\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.836952 master-0 kubenswrapper[3991]: I0318 09:53:03.835343 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-cni-dir\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.836952 master-0 kubenswrapper[3991]: I0318 09:53:03.835365 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-os-release\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.836952 master-0 kubenswrapper[3991]: I0318 09:53:03.835383 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-hostroot\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.836952 master-0 kubenswrapper[3991]: I0318 09:53:03.835432 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-hostroot\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.836952 master-0 kubenswrapper[3991]: I0318 09:53:03.835463 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-run-multus-certs\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.836952 master-0 kubenswrapper[3991]: I0318 09:53:03.835529 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-cni-dir\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.836952 master-0 kubenswrapper[3991]: I0318 09:53:03.835580 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-os-release\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.836952 master-0 kubenswrapper[3991]: I0318 09:53:03.835609 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-cni-binary-copy\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.836952 master-0 kubenswrapper[3991]: I0318 09:53:03.835739 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-daemon-config\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.863264 master-0 kubenswrapper[3991]: I0318 09:53:03.863216 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcj8f\" (UniqueName: \"kubernetes.io/projected/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-kube-api-access-hcj8f\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.890446 master-0 kubenswrapper[3991]: I0318 09:53:03.890366 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-xgdvw"
Mar 18 09:53:03.936195 master-0 kubenswrapper[3991]: I0318 09:53:03.936043 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-cnibin\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw"
Mar 18 09:53:03.936657 master-0 kubenswrapper[3991]: I0318 09:53:03.936571 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw"
Mar 18 09:53:03.937163 master-0 kubenswrapper[3991]: I0318 09:53:03.937070 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/91331360-dc70-45bb-a815-e00664bae6c4-cni-binary-copy\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw"
Mar 18 09:53:03.937566 master-0 kubenswrapper[3991]: I0318 09:53:03.937490 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w8sl\" (UniqueName: \"kubernetes.io/projected/91331360-dc70-45bb-a815-e00664bae6c4-kube-api-access-8w8sl\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw"
Mar 18 09:53:03.938070 master-0 kubenswrapper[3991]: I0318 09:53:03.937968 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-os-release\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw"
Mar 18 09:53:03.938479 master-0 kubenswrapper[3991]: I0318 09:53:03.938406 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/91331360-dc70-45bb-a815-e00664bae6c4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw"
Mar 18 09:53:03.938793 master-0 kubenswrapper[3991]: I0318 09:53:03.938660 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-system-cni-dir\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw"
Mar 18 09:53:03.938793 master-0 kubenswrapper[3991]: I0318 09:53:03.938718 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/91331360-dc70-45bb-a815-e00664bae6c4-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw"
Mar 18 09:53:04.039210 master-0 kubenswrapper[3991]: I0318 09:53:04.039039 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-cnibin\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw"
Mar 18 09:53:04.039210 master-0 kubenswrapper[3991]: I0318 09:53:04.039122 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw"
Mar 18 09:53:04.039525 master-0 kubenswrapper[3991]: I0318 09:53:04.039420 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/91331360-dc70-45bb-a815-e00664bae6c4-cni-binary-copy\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw"
Mar 18 09:53:04.039525 master-0 kubenswrapper[3991]: I0318 09:53:04.039483 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw"
Mar 18 09:53:04.039644 master-0 kubenswrapper[3991]: I0318 09:53:04.039531 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w8sl\" (UniqueName: \"kubernetes.io/projected/91331360-dc70-45bb-a815-e00664bae6c4-kube-api-access-8w8sl\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw"
Mar 18 09:53:04.039644 master-0 kubenswrapper[3991]: I0318 09:53:04.039554 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName:
\"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-cnibin\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 09:53:04.039644 master-0 kubenswrapper[3991]: I0318 09:53:04.039578 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-os-release\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 09:53:04.039644 master-0 kubenswrapper[3991]: I0318 09:53:04.039611 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/91331360-dc70-45bb-a815-e00664bae6c4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 09:53:04.039906 master-0 kubenswrapper[3991]: I0318 09:53:04.039651 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-system-cni-dir\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 09:53:04.039906 master-0 kubenswrapper[3991]: I0318 09:53:04.039684 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/91331360-dc70-45bb-a815-e00664bae6c4-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 09:53:04.041007 master-0 
kubenswrapper[3991]: I0318 09:53:04.040120 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-system-cni-dir\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 09:53:04.041007 master-0 kubenswrapper[3991]: I0318 09:53:04.040209 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-os-release\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 09:53:04.041007 master-0 kubenswrapper[3991]: I0318 09:53:04.040873 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/91331360-dc70-45bb-a815-e00664bae6c4-cni-binary-copy\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 09:53:04.041007 master-0 kubenswrapper[3991]: I0318 09:53:04.040893 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/91331360-dc70-45bb-a815-e00664bae6c4-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 09:53:04.041007 master-0 kubenswrapper[3991]: I0318 09:53:04.040955 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/91331360-dc70-45bb-a815-e00664bae6c4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: 
\"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 09:53:04.068253 master-0 kubenswrapper[3991]: I0318 09:53:04.068142 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w8sl\" (UniqueName: \"kubernetes.io/projected/91331360-dc70-45bb-a815-e00664bae6c4-kube-api-access-8w8sl\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 09:53:04.104110 master-0 kubenswrapper[3991]: I0318 09:53:04.103599 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 09:53:04.119586 master-0 kubenswrapper[3991]: W0318 09:53:04.119510 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91331360_dc70_45bb_a815_e00664bae6c4.slice/crio-0c606c4f78cc5d83eabd2765b617ca07a15da7eb4ca4b85bfad4f7028933f81f WatchSource:0}: Error finding container 0c606c4f78cc5d83eabd2765b617ca07a15da7eb4ca4b85bfad4f7028933f81f: Status 404 returned error can't find the container with id 0c606c4f78cc5d83eabd2765b617ca07a15da7eb4ca4b85bfad4f7028933f81f Mar 18 09:53:04.438562 master-0 kubenswrapper[3991]: I0318 09:53:04.438483 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dg6dw" event={"ID":"91331360-dc70-45bb-a815-e00664bae6c4","Type":"ContainerStarted","Data":"0c606c4f78cc5d83eabd2765b617ca07a15da7eb4ca4b85bfad4f7028933f81f"} Mar 18 09:53:04.440446 master-0 kubenswrapper[3991]: I0318 09:53:04.440362 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xgdvw" event={"ID":"03de1ea6-da57-4e13-8e5a-d5e10a9f9957","Type":"ContainerStarted","Data":"d9a9cd3f2878ec84a255f5f74dc3526f3a1623550d44547c9ce47a07a51bb959"} Mar 18 09:53:04.580548 master-0 
kubenswrapper[3991]: I0318 09:53:04.579976 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-tbxt4"] Mar 18 09:53:04.580548 master-0 kubenswrapper[3991]: I0318 09:53:04.580315 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:53:04.580548 master-0 kubenswrapper[3991]: E0318 09:53:04.580375 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343" Mar 18 09:53:04.745533 master-0 kubenswrapper[3991]: I0318 09:53:04.745354 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x6ht\" (UniqueName: \"kubernetes.io/projected/0442ec6c-5973-40a5-a0c3-dc02de46d343-kube-api-access-5x6ht\") pod \"network-metrics-daemon-tbxt4\" (UID: \"0442ec6c-5973-40a5-a0c3-dc02de46d343\") " pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:53:04.745533 master-0 kubenswrapper[3991]: I0318 09:53:04.745460 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs\") pod \"network-metrics-daemon-tbxt4\" (UID: \"0442ec6c-5973-40a5-a0c3-dc02de46d343\") " pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:53:04.846235 master-0 kubenswrapper[3991]: I0318 09:53:04.846144 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x6ht\" (UniqueName: \"kubernetes.io/projected/0442ec6c-5973-40a5-a0c3-dc02de46d343-kube-api-access-5x6ht\") pod 
\"network-metrics-daemon-tbxt4\" (UID: \"0442ec6c-5973-40a5-a0c3-dc02de46d343\") " pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:53:04.846235 master-0 kubenswrapper[3991]: I0318 09:53:04.846237 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs\") pod \"network-metrics-daemon-tbxt4\" (UID: \"0442ec6c-5973-40a5-a0c3-dc02de46d343\") " pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:53:04.851891 master-0 kubenswrapper[3991]: E0318 09:53:04.846409 3991 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 09:53:04.851891 master-0 kubenswrapper[3991]: E0318 09:53:04.846488 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs podName:0442ec6c-5973-40a5-a0c3-dc02de46d343 nodeName:}" failed. No retries permitted until 2026-03-18 09:53:05.346460496 +0000 UTC m=+69.305400431 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs") pod "network-metrics-daemon-tbxt4" (UID: "0442ec6c-5973-40a5-a0c3-dc02de46d343") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 09:53:04.874475 master-0 kubenswrapper[3991]: I0318 09:53:04.874429 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x6ht\" (UniqueName: \"kubernetes.io/projected/0442ec6c-5973-40a5-a0c3-dc02de46d343-kube-api-access-5x6ht\") pod \"network-metrics-daemon-tbxt4\" (UID: \"0442ec6c-5973-40a5-a0c3-dc02de46d343\") " pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:53:05.352098 master-0 kubenswrapper[3991]: I0318 09:53:05.351800 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs\") pod \"network-metrics-daemon-tbxt4\" (UID: \"0442ec6c-5973-40a5-a0c3-dc02de46d343\") " pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:53:05.352098 master-0 kubenswrapper[3991]: E0318 09:53:05.351997 3991 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 09:53:05.352098 master-0 kubenswrapper[3991]: E0318 09:53:05.352072 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs podName:0442ec6c-5973-40a5-a0c3-dc02de46d343 nodeName:}" failed. No retries permitted until 2026-03-18 09:53:06.352050656 +0000 UTC m=+70.310990561 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs") pod "network-metrics-daemon-tbxt4" (UID: "0442ec6c-5973-40a5-a0c3-dc02de46d343") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 09:53:06.149394 master-0 kubenswrapper[3991]: I0318 09:53:06.148951 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:53:06.149394 master-0 kubenswrapper[3991]: E0318 09:53:06.149075 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343" Mar 18 09:53:06.371567 master-0 kubenswrapper[3991]: I0318 09:53:06.371511 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs\") pod \"network-metrics-daemon-tbxt4\" (UID: \"0442ec6c-5973-40a5-a0c3-dc02de46d343\") " pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:53:06.371734 master-0 kubenswrapper[3991]: E0318 09:53:06.371628 3991 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 09:53:06.371734 master-0 kubenswrapper[3991]: E0318 09:53:06.371674 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs podName:0442ec6c-5973-40a5-a0c3-dc02de46d343 nodeName:}" failed. No retries permitted until 2026-03-18 09:53:08.371659213 +0000 UTC m=+72.330599108 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs") pod "network-metrics-daemon-tbxt4" (UID: "0442ec6c-5973-40a5-a0c3-dc02de46d343") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 09:53:07.452764 master-0 kubenswrapper[3991]: I0318 09:53:07.452657 3991 generic.go:334] "Generic (PLEG): container finished" podID="91331360-dc70-45bb-a815-e00664bae6c4" containerID="8ef686cc40f68aff82f23ce87e06ff13fba380e3cd6b61b827160c9e73c4cbbc" exitCode=0 Mar 18 09:53:07.452764 master-0 kubenswrapper[3991]: I0318 09:53:07.452720 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dg6dw" event={"ID":"91331360-dc70-45bb-a815-e00664bae6c4","Type":"ContainerDied","Data":"8ef686cc40f68aff82f23ce87e06ff13fba380e3cd6b61b827160c9e73c4cbbc"} Mar 18 09:53:08.150142 master-0 kubenswrapper[3991]: I0318 09:53:08.149565 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:53:08.150142 master-0 kubenswrapper[3991]: E0318 09:53:08.149735 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343" Mar 18 09:53:08.387870 master-0 kubenswrapper[3991]: I0318 09:53:08.387041 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs\") pod \"network-metrics-daemon-tbxt4\" (UID: \"0442ec6c-5973-40a5-a0c3-dc02de46d343\") " pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:53:08.387870 master-0 kubenswrapper[3991]: E0318 09:53:08.387191 3991 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 09:53:08.387870 master-0 kubenswrapper[3991]: E0318 09:53:08.387256 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs podName:0442ec6c-5973-40a5-a0c3-dc02de46d343 nodeName:}" failed. No retries permitted until 2026-03-18 09:53:12.387235561 +0000 UTC m=+76.346175466 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs") pod "network-metrics-daemon-tbxt4" (UID: "0442ec6c-5973-40a5-a0c3-dc02de46d343") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 09:53:10.149968 master-0 kubenswrapper[3991]: I0318 09:53:10.149902 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:53:10.152316 master-0 kubenswrapper[3991]: E0318 09:53:10.150151 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343" Mar 18 09:53:12.187394 master-0 kubenswrapper[3991]: I0318 09:53:12.186322 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:53:12.187394 master-0 kubenswrapper[3991]: E0318 09:53:12.186438 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343" Mar 18 09:53:12.187394 master-0 kubenswrapper[3991]: I0318 09:53:12.186872 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr" Mar 18 09:53:12.187394 master-0 kubenswrapper[3991]: E0318 09:53:12.186969 3991 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 09:53:12.187394 master-0 kubenswrapper[3991]: E0318 09:53:12.187006 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert podName:15f8941b-dba2-40ba-86d5-3318f5b635cc nodeName:}" failed. No retries permitted until 2026-03-18 09:53:44.186990683 +0000 UTC m=+108.145930578 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert") pod "cluster-version-operator-56d8475767-c2qzr" (UID: "15f8941b-dba2-40ba-86d5-3318f5b635cc") : secret "cluster-version-operator-serving-cert" not found Mar 18 09:53:12.389322 master-0 kubenswrapper[3991]: I0318 09:53:12.389053 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs\") pod \"network-metrics-daemon-tbxt4\" (UID: \"0442ec6c-5973-40a5-a0c3-dc02de46d343\") " pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:53:12.389551 master-0 kubenswrapper[3991]: E0318 09:53:12.389476 3991 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 09:53:12.389619 master-0 kubenswrapper[3991]: E0318 09:53:12.389577 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs podName:0442ec6c-5973-40a5-a0c3-dc02de46d343 nodeName:}" failed. No retries permitted until 2026-03-18 09:53:20.38954494 +0000 UTC m=+84.348484835 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs") pod "network-metrics-daemon-tbxt4" (UID: "0442ec6c-5973-40a5-a0c3-dc02de46d343") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 09:53:14.149567 master-0 kubenswrapper[3991]: I0318 09:53:14.149511 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:53:14.150018 master-0 kubenswrapper[3991]: E0318 09:53:14.149662 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343" Mar 18 09:53:15.972395 master-0 kubenswrapper[3991]: I0318 09:53:15.969503 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx"] Mar 18 09:53:15.972395 master-0 kubenswrapper[3991]: I0318 09:53:15.969891 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx" Mar 18 09:53:15.973936 master-0 kubenswrapper[3991]: I0318 09:53:15.973058 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 18 09:53:15.973936 master-0 kubenswrapper[3991]: I0318 09:53:15.973100 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 18 09:53:15.973936 master-0 kubenswrapper[3991]: I0318 09:53:15.973168 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 18 09:53:15.973936 master-0 kubenswrapper[3991]: I0318 09:53:15.973239 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 18 09:53:15.974460 master-0 kubenswrapper[3991]: I0318 09:53:15.974425 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 18 09:53:16.016282 master-0 kubenswrapper[3991]: I0318 
09:53:16.016219 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d0605021-862d-424a-a4c1-037fb005b77e-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-8txtx\" (UID: \"d0605021-862d-424a-a4c1-037fb005b77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx" Mar 18 09:53:16.016282 master-0 kubenswrapper[3991]: I0318 09:53:16.016280 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d0605021-862d-424a-a4c1-037fb005b77e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-8txtx\" (UID: \"d0605021-862d-424a-a4c1-037fb005b77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx" Mar 18 09:53:16.016555 master-0 kubenswrapper[3991]: I0318 09:53:16.016307 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d0605021-862d-424a-a4c1-037fb005b77e-env-overrides\") pod \"ovnkube-control-plane-57f769d897-8txtx\" (UID: \"d0605021-862d-424a-a4c1-037fb005b77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx" Mar 18 09:53:16.016555 master-0 kubenswrapper[3991]: I0318 09:53:16.016327 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxj5c\" (UniqueName: \"kubernetes.io/projected/d0605021-862d-424a-a4c1-037fb005b77e-kube-api-access-cxj5c\") pod \"ovnkube-control-plane-57f769d897-8txtx\" (UID: \"d0605021-862d-424a-a4c1-037fb005b77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx" Mar 18 09:53:16.117084 master-0 kubenswrapper[3991]: I0318 09:53:16.117011 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/d0605021-862d-424a-a4c1-037fb005b77e-env-overrides\") pod \"ovnkube-control-plane-57f769d897-8txtx\" (UID: \"d0605021-862d-424a-a4c1-037fb005b77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx" Mar 18 09:53:16.117084 master-0 kubenswrapper[3991]: I0318 09:53:16.117059 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxj5c\" (UniqueName: \"kubernetes.io/projected/d0605021-862d-424a-a4c1-037fb005b77e-kube-api-access-cxj5c\") pod \"ovnkube-control-plane-57f769d897-8txtx\" (UID: \"d0605021-862d-424a-a4c1-037fb005b77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx" Mar 18 09:53:16.117367 master-0 kubenswrapper[3991]: I0318 09:53:16.117200 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d0605021-862d-424a-a4c1-037fb005b77e-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-8txtx\" (UID: \"d0605021-862d-424a-a4c1-037fb005b77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx" Mar 18 09:53:16.117367 master-0 kubenswrapper[3991]: I0318 09:53:16.117224 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d0605021-862d-424a-a4c1-037fb005b77e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-8txtx\" (UID: \"d0605021-862d-424a-a4c1-037fb005b77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx" Mar 18 09:53:16.117614 master-0 kubenswrapper[3991]: I0318 09:53:16.117571 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d0605021-862d-424a-a4c1-037fb005b77e-env-overrides\") pod \"ovnkube-control-plane-57f769d897-8txtx\" (UID: \"d0605021-862d-424a-a4c1-037fb005b77e\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx"
Mar 18 09:53:16.119347 master-0 kubenswrapper[3991]: I0318 09:53:16.118737 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d0605021-862d-424a-a4c1-037fb005b77e-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-8txtx\" (UID: \"d0605021-862d-424a-a4c1-037fb005b77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx"
Mar 18 09:53:16.120960 master-0 kubenswrapper[3991]: I0318 09:53:16.120872 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d0605021-862d-424a-a4c1-037fb005b77e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-8txtx\" (UID: \"d0605021-862d-424a-a4c1-037fb005b77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx"
Mar 18 09:53:16.131864 master-0 kubenswrapper[3991]: I0318 09:53:16.131817 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxj5c\" (UniqueName: \"kubernetes.io/projected/d0605021-862d-424a-a4c1-037fb005b77e-kube-api-access-cxj5c\") pod \"ovnkube-control-plane-57f769d897-8txtx\" (UID: \"d0605021-862d-424a-a4c1-037fb005b77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx"
Mar 18 09:53:16.149887 master-0 kubenswrapper[3991]: I0318 09:53:16.149727 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4"
Mar 18 09:53:16.149887 master-0 kubenswrapper[3991]: E0318 09:53:16.149868 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343"
Mar 18 09:53:16.195265 master-0 kubenswrapper[3991]: I0318 09:53:16.195172 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tdz4c"]
Mar 18 09:53:16.197452 master-0 kubenswrapper[3991]: I0318 09:53:16.197387 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.199966 master-0 kubenswrapper[3991]: I0318 09:53:16.199923 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 18 09:53:16.200174 master-0 kubenswrapper[3991]: I0318 09:53:16.200132 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 18 09:53:16.285183 master-0 kubenswrapper[3991]: I0318 09:53:16.285132 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx"
Mar 18 09:53:16.302656 master-0 kubenswrapper[3991]: W0318 09:53:16.302612 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0605021_862d_424a_a4c1_037fb005b77e.slice/crio-009c5d81632f0f08c8ed08e157decfc8eae7a1397849b1d1bb183a9c0b36e696 WatchSource:0}: Error finding container 009c5d81632f0f08c8ed08e157decfc8eae7a1397849b1d1bb183a9c0b36e696: Status 404 returned error can't find the container with id 009c5d81632f0f08c8ed08e157decfc8eae7a1397849b1d1bb183a9c0b36e696
Mar 18 09:53:16.319323 master-0 kubenswrapper[3991]: I0318 09:53:16.319277 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-kubelet\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.319323 master-0 kubenswrapper[3991]: I0318 09:53:16.319322 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-etc-openvswitch\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.319441 master-0 kubenswrapper[3991]: I0318 09:53:16.319347 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.319508 master-0 kubenswrapper[3991]: I0318 09:53:16.319439 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-ovnkube-config\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.319508 master-0 kubenswrapper[3991]: I0318 09:53:16.319490 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-slash\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.319586 master-0 kubenswrapper[3991]: I0318 09:53:16.319523 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-run-openvswitch\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.319586 master-0 kubenswrapper[3991]: I0318 09:53:16.319559 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-cni-bin\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.319661 master-0 kubenswrapper[3991]: I0318 09:53:16.319591 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-env-overrides\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.319661 master-0 kubenswrapper[3991]: I0318 09:53:16.319621 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4mbp\" (UniqueName: \"kubernetes.io/projected/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-kube-api-access-g4mbp\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.319745 master-0 kubenswrapper[3991]: I0318 09:53:16.319687 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-run-netns\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.319745 master-0 kubenswrapper[3991]: I0318 09:53:16.319736 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-node-log\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.319842 master-0 kubenswrapper[3991]: I0318 09:53:16.319763 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-log-socket\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.319842 master-0 kubenswrapper[3991]: I0318 09:53:16.319807 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-systemd-units\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.319922 master-0 kubenswrapper[3991]: I0318 09:53:16.319850 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-run-systemd\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.319922 master-0 kubenswrapper[3991]: I0318 09:53:16.319894 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-run-ovn-kubernetes\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.320005 master-0 kubenswrapper[3991]: I0318 09:53:16.319983 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-run-ovn\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.320059 master-0 kubenswrapper[3991]: I0318 09:53:16.320027 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-ovn-node-metrics-cert\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.320110 master-0 kubenswrapper[3991]: I0318 09:53:16.320088 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-ovnkube-script-lib\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.320164 master-0 kubenswrapper[3991]: I0318 09:53:16.320138 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-cni-netd\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.320205 master-0 kubenswrapper[3991]: I0318 09:53:16.320177 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-var-lib-openvswitch\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.422363 master-0 kubenswrapper[3991]: I0318 09:53:16.421272 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-run-ovn\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.422363 master-0 kubenswrapper[3991]: I0318 09:53:16.421337 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-ovn-node-metrics-cert\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.422363 master-0 kubenswrapper[3991]: I0318 09:53:16.421473 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-run-ovn\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.422363 master-0 kubenswrapper[3991]: I0318 09:53:16.421574 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-ovnkube-script-lib\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.422363 master-0 kubenswrapper[3991]: I0318 09:53:16.421642 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-cni-netd\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.422363 master-0 kubenswrapper[3991]: I0318 09:53:16.421666 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-var-lib-openvswitch\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.422363 master-0 kubenswrapper[3991]: I0318 09:53:16.421696 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-etc-openvswitch\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.422363 master-0 kubenswrapper[3991]: I0318 09:53:16.421721 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.422363 master-0 kubenswrapper[3991]: I0318 09:53:16.421817 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-cni-netd\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.422363 master-0 kubenswrapper[3991]: I0318 09:53:16.421927 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-kubelet\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.422363 master-0 kubenswrapper[3991]: I0318 09:53:16.421975 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-ovnkube-config\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.422363 master-0 kubenswrapper[3991]: I0318 09:53:16.421982 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.422363 master-0 kubenswrapper[3991]: I0318 09:53:16.422015 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-slash\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.422363 master-0 kubenswrapper[3991]: I0318 09:53:16.422226 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-var-lib-openvswitch\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.422363 master-0 kubenswrapper[3991]: I0318 09:53:16.422263 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-kubelet\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.422363 master-0 kubenswrapper[3991]: I0318 09:53:16.422305 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-etc-openvswitch\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.422363 master-0 kubenswrapper[3991]: I0318 09:53:16.422333 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-run-openvswitch\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.423214 master-0 kubenswrapper[3991]: I0318 09:53:16.422420 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-run-openvswitch\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.423214 master-0 kubenswrapper[3991]: I0318 09:53:16.422507 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-cni-bin\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.423214 master-0 kubenswrapper[3991]: I0318 09:53:16.422546 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-env-overrides\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.423214 master-0 kubenswrapper[3991]: I0318 09:53:16.422575 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4mbp\" (UniqueName: \"kubernetes.io/projected/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-kube-api-access-g4mbp\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.423214 master-0 kubenswrapper[3991]: I0318 09:53:16.422609 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-run-netns\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.423214 master-0 kubenswrapper[3991]: I0318 09:53:16.422638 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-node-log\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.423214 master-0 kubenswrapper[3991]: I0318 09:53:16.422711 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-log-socket\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.423214 master-0 kubenswrapper[3991]: I0318 09:53:16.422776 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-run-systemd\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.423214 master-0 kubenswrapper[3991]: I0318 09:53:16.422813 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-run-ovn-kubernetes\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.423214 master-0 kubenswrapper[3991]: I0318 09:53:16.422897 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-ovnkube-config\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.423214 master-0 kubenswrapper[3991]: I0318 09:53:16.422897 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-systemd-units\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.423214 master-0 kubenswrapper[3991]: I0318 09:53:16.422936 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-ovnkube-script-lib\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.423214 master-0 kubenswrapper[3991]: I0318 09:53:16.423001 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-run-ovn-kubernetes\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.423214 master-0 kubenswrapper[3991]: I0318 09:53:16.423151 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-run-netns\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.423214 master-0 kubenswrapper[3991]: I0318 09:53:16.423181 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-slash\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.423214 master-0 kubenswrapper[3991]: I0318 09:53:16.423213 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-cni-bin\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.423865 master-0 kubenswrapper[3991]: I0318 09:53:16.423507 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-log-socket\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.423865 master-0 kubenswrapper[3991]: I0318 09:53:16.423542 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-systemd-units\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.423865 master-0 kubenswrapper[3991]: I0318 09:53:16.423574 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-run-systemd\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.423865 master-0 kubenswrapper[3991]: I0318 09:53:16.423597 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-node-log\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.423865 master-0 kubenswrapper[3991]: I0318 09:53:16.423659 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-env-overrides\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.426507 master-0 kubenswrapper[3991]: I0318 09:53:16.426463 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-ovn-node-metrics-cert\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.441979 master-0 kubenswrapper[3991]: I0318 09:53:16.441900 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4mbp\" (UniqueName: \"kubernetes.io/projected/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-kube-api-access-g4mbp\") pod \"ovnkube-node-tdz4c\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.476031 master-0 kubenswrapper[3991]: I0318 09:53:16.475955 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dg6dw" event={"ID":"91331360-dc70-45bb-a815-e00664bae6c4","Type":"ContainerStarted","Data":"160626554dc940cedbe7ec0ddb596f31e480d63196f634936e05702f85c45819"}
Mar 18 09:53:16.477431 master-0 kubenswrapper[3991]: I0318 09:53:16.477382 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xgdvw" event={"ID":"03de1ea6-da57-4e13-8e5a-d5e10a9f9957","Type":"ContainerStarted","Data":"2da220e2852846e9b471d19bf3329629d81b1d881746691dfdddb60fd750adba"}
Mar 18 09:53:16.479964 master-0 kubenswrapper[3991]: I0318 09:53:16.479886 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx" event={"ID":"d0605021-862d-424a-a4c1-037fb005b77e","Type":"ContainerStarted","Data":"009c5d81632f0f08c8ed08e157decfc8eae7a1397849b1d1bb183a9c0b36e696"}
Mar 18 09:53:16.512792 master-0 kubenswrapper[3991]: I0318 09:53:16.512667 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-xgdvw" podStartSLOduration=1.13670659 podStartE2EDuration="13.512628706s" podCreationTimestamp="2026-03-18 09:53:03 +0000 UTC" firstStartedPulling="2026-03-18 09:53:03.910820319 +0000 UTC m=+67.869760254" lastFinishedPulling="2026-03-18 09:53:16.286742475 +0000 UTC m=+80.245682370" observedRunningTime="2026-03-18 09:53:16.511150548 +0000 UTC m=+80.470090443" watchObservedRunningTime="2026-03-18 09:53:16.512628706 +0000 UTC m=+80.471568601"
Mar 18 09:53:16.520196 master-0 kubenswrapper[3991]: I0318 09:53:16.520147 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:16.566088 master-0 kubenswrapper[3991]: W0318 09:53:16.566042 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded0d20a9_cfe9_47fd_a4ca_cb04b881e7fd.slice/crio-5759283fba42fbbd311783807000d9d77eaa5d0bcefb9d4dbe9eb43e6dbcd178 WatchSource:0}: Error finding container 5759283fba42fbbd311783807000d9d77eaa5d0bcefb9d4dbe9eb43e6dbcd178: Status 404 returned error can't find the container with id 5759283fba42fbbd311783807000d9d77eaa5d0bcefb9d4dbe9eb43e6dbcd178
Mar 18 09:53:17.485559 master-0 kubenswrapper[3991]: I0318 09:53:17.485421 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c" event={"ID":"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd","Type":"ContainerStarted","Data":"5759283fba42fbbd311783807000d9d77eaa5d0bcefb9d4dbe9eb43e6dbcd178"}
Mar 18 09:53:17.486641 master-0 kubenswrapper[3991]: I0318 09:53:17.486608 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx" event={"ID":"d0605021-862d-424a-a4c1-037fb005b77e","Type":"ContainerStarted","Data":"8fd0343f3736f8798a80abef616c5f452f165d0a44154cd6c326312df6cc8ae9"}
Mar 18 09:53:17.488295 master-0 kubenswrapper[3991]: I0318 09:53:17.488182 3991 generic.go:334] "Generic (PLEG): container finished" podID="91331360-dc70-45bb-a815-e00664bae6c4" containerID="160626554dc940cedbe7ec0ddb596f31e480d63196f634936e05702f85c45819" exitCode=0
Mar 18 09:53:17.488295 master-0 kubenswrapper[3991]: I0318 09:53:17.488231 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dg6dw" event={"ID":"91331360-dc70-45bb-a815-e00664bae6c4","Type":"ContainerDied","Data":"160626554dc940cedbe7ec0ddb596f31e480d63196f634936e05702f85c45819"}
Mar 18 09:53:18.149233 master-0 kubenswrapper[3991]: I0318 09:53:18.149149 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4"
Mar 18 09:53:18.149737 master-0 kubenswrapper[3991]: E0318 09:53:18.149445 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343"
Mar 18 09:53:20.149433 master-0 kubenswrapper[3991]: I0318 09:53:20.149366 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4"
Mar 18 09:53:20.150411 master-0 kubenswrapper[3991]: E0318 09:53:20.149595 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343"
Mar 18 09:53:20.460043 master-0 kubenswrapper[3991]: I0318 09:53:20.459950 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs\") pod \"network-metrics-daemon-tbxt4\" (UID: \"0442ec6c-5973-40a5-a0c3-dc02de46d343\") " pod="openshift-multus/network-metrics-daemon-tbxt4"
Mar 18 09:53:20.460197 master-0 kubenswrapper[3991]: E0318 09:53:20.460091 3991 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 18 09:53:20.460197 master-0 kubenswrapper[3991]: E0318 09:53:20.460146 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs podName:0442ec6c-5973-40a5-a0c3-dc02de46d343 nodeName:}" failed. No retries permitted until 2026-03-18 09:53:36.460128419 +0000 UTC m=+100.419068304 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs") pod "network-metrics-daemon-tbxt4" (UID: "0442ec6c-5973-40a5-a0c3-dc02de46d343") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 18 09:53:21.868339 master-0 kubenswrapper[3991]: I0318 09:53:21.867950 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-42l55"]
Mar 18 09:53:21.868339 master-0 kubenswrapper[3991]: I0318 09:53:21.868236 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-42l55"
Mar 18 09:53:21.868339 master-0 kubenswrapper[3991]: E0318 09:53:21.868289 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-42l55" podUID="74795f5d-dcd7-4723-8931-c34b59ce3087"
Mar 18 09:53:21.974036 master-0 kubenswrapper[3991]: I0318 09:53:21.973951 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rzsk\" (UniqueName: \"kubernetes.io/projected/74795f5d-dcd7-4723-8931-c34b59ce3087-kube-api-access-8rzsk\") pod \"network-check-target-42l55\" (UID: \"74795f5d-dcd7-4723-8931-c34b59ce3087\") " pod="openshift-network-diagnostics/network-check-target-42l55"
Mar 18 09:53:22.075049 master-0 kubenswrapper[3991]: I0318 09:53:22.074978 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rzsk\" (UniqueName: \"kubernetes.io/projected/74795f5d-dcd7-4723-8931-c34b59ce3087-kube-api-access-8rzsk\") pod \"network-check-target-42l55\" (UID: \"74795f5d-dcd7-4723-8931-c34b59ce3087\") " pod="openshift-network-diagnostics/network-check-target-42l55"
Mar 18 09:53:22.149966 master-0 kubenswrapper[3991]: I0318 09:53:22.149665 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4"
Mar 18 09:53:22.149966 master-0 kubenswrapper[3991]: E0318 09:53:22.149876 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343"
Mar 18 09:53:23.120038 master-0 kubenswrapper[3991]: E0318 09:53:23.119992 3991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 18 09:53:23.120038 master-0 kubenswrapper[3991]: E0318 09:53:23.120036 3991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 18 09:53:23.120386 master-0 kubenswrapper[3991]: E0318 09:53:23.120057 3991 projected.go:194] Error preparing data for projected volume kube-api-access-8rzsk for pod openshift-network-diagnostics/network-check-target-42l55: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 18 09:53:23.120386 master-0 kubenswrapper[3991]: E0318 09:53:23.120143 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74795f5d-dcd7-4723-8931-c34b59ce3087-kube-api-access-8rzsk podName:74795f5d-dcd7-4723-8931-c34b59ce3087 nodeName:}" failed. No retries permitted until 2026-03-18 09:53:23.620117414 +0000 UTC m=+87.579057319 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8rzsk" (UniqueName: "kubernetes.io/projected/74795f5d-dcd7-4723-8931-c34b59ce3087-kube-api-access-8rzsk") pod "network-check-target-42l55" (UID: "74795f5d-dcd7-4723-8931-c34b59ce3087") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 18 09:53:23.149608 master-0 kubenswrapper[3991]: I0318 09:53:23.149543 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-42l55"
Mar 18 09:53:23.149836 master-0 kubenswrapper[3991]: E0318 09:53:23.149769 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-42l55" podUID="74795f5d-dcd7-4723-8931-c34b59ce3087"
Mar 18 09:53:23.508259 master-0 kubenswrapper[3991]: I0318 09:53:23.508127 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dg6dw" event={"ID":"91331360-dc70-45bb-a815-e00664bae6c4","Type":"ContainerStarted","Data":"ba4b50efa1c5a3ef4b380af81a12c8288cb0cec49cd61d28198db983936b1f94"}
Mar 18 09:53:23.688521 master-0 kubenswrapper[3991]: I0318 09:53:23.688446 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rzsk\" (UniqueName: \"kubernetes.io/projected/74795f5d-dcd7-4723-8931-c34b59ce3087-kube-api-access-8rzsk\") pod \"network-check-target-42l55\" (UID: \"74795f5d-dcd7-4723-8931-c34b59ce3087\") " pod="openshift-network-diagnostics/network-check-target-42l55"
Mar 18 09:53:23.688774 master-0 kubenswrapper[3991]: E0318 09:53:23.688574 3991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 18 09:53:23.688774 master-0 kubenswrapper[3991]: E0318 09:53:23.688590 3991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 18 09:53:23.688774 master-0 kubenswrapper[3991]: E0318 09:53:23.688599 3991 projected.go:194] Error preparing data for projected volume
kube-api-access-8rzsk for pod openshift-network-diagnostics/network-check-target-42l55: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 09:53:23.688774 master-0 kubenswrapper[3991]: E0318 09:53:23.688649 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74795f5d-dcd7-4723-8931-c34b59ce3087-kube-api-access-8rzsk podName:74795f5d-dcd7-4723-8931-c34b59ce3087 nodeName:}" failed. No retries permitted until 2026-03-18 09:53:24.688629928 +0000 UTC m=+88.647569823 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-8rzsk" (UniqueName: "kubernetes.io/projected/74795f5d-dcd7-4723-8931-c34b59ce3087-kube-api-access-8rzsk") pod "network-check-target-42l55" (UID: "74795f5d-dcd7-4723-8931-c34b59ce3087") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 09:53:23.779215 master-0 kubenswrapper[3991]: I0318 09:53:23.779069 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 18 09:53:24.149585 master-0 kubenswrapper[3991]: I0318 09:53:24.149542 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:53:24.150190 master-0 kubenswrapper[3991]: E0318 09:53:24.149662 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343" Mar 18 09:53:24.512152 master-0 kubenswrapper[3991]: I0318 09:53:24.512027 3991 generic.go:334] "Generic (PLEG): container finished" podID="91331360-dc70-45bb-a815-e00664bae6c4" containerID="ba4b50efa1c5a3ef4b380af81a12c8288cb0cec49cd61d28198db983936b1f94" exitCode=0 Mar 18 09:53:24.512152 master-0 kubenswrapper[3991]: I0318 09:53:24.512093 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dg6dw" event={"ID":"91331360-dc70-45bb-a815-e00664bae6c4","Type":"ContainerDied","Data":"ba4b50efa1c5a3ef4b380af81a12c8288cb0cec49cd61d28198db983936b1f94"} Mar 18 09:53:24.695429 master-0 kubenswrapper[3991]: I0318 09:53:24.695379 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rzsk\" (UniqueName: \"kubernetes.io/projected/74795f5d-dcd7-4723-8931-c34b59ce3087-kube-api-access-8rzsk\") pod \"network-check-target-42l55\" (UID: \"74795f5d-dcd7-4723-8931-c34b59ce3087\") " pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 09:53:24.695536 master-0 kubenswrapper[3991]: E0318 09:53:24.695512 3991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 09:53:24.695536 master-0 kubenswrapper[3991]: E0318 09:53:24.695531 3991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 09:53:24.695626 master-0 kubenswrapper[3991]: E0318 09:53:24.695543 3991 projected.go:194] Error preparing data for projected volume kube-api-access-8rzsk for pod openshift-network-diagnostics/network-check-target-42l55: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 09:53:24.695626 master-0 kubenswrapper[3991]: E0318 09:53:24.695602 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74795f5d-dcd7-4723-8931-c34b59ce3087-kube-api-access-8rzsk podName:74795f5d-dcd7-4723-8931-c34b59ce3087 nodeName:}" failed. No retries permitted until 2026-03-18 09:53:26.695583015 +0000 UTC m=+90.654522920 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-8rzsk" (UniqueName: "kubernetes.io/projected/74795f5d-dcd7-4723-8931-c34b59ce3087-kube-api-access-8rzsk") pod "network-check-target-42l55" (UID: "74795f5d-dcd7-4723-8931-c34b59ce3087") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 09:53:25.149332 master-0 kubenswrapper[3991]: I0318 09:53:25.149038 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 09:53:25.149516 master-0 kubenswrapper[3991]: E0318 09:53:25.149432 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-42l55" podUID="74795f5d-dcd7-4723-8931-c34b59ce3087" Mar 18 09:53:25.708015 master-0 kubenswrapper[3991]: I0318 09:53:25.707885 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-scheduler-master-0" podStartSLOduration=2.707855958 podStartE2EDuration="2.707855958s" podCreationTimestamp="2026-03-18 09:53:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:53:25.706910994 +0000 UTC m=+89.665850929" watchObservedRunningTime="2026-03-18 09:53:25.707855958 +0000 UTC m=+89.666795903" Mar 18 09:53:26.149488 master-0 kubenswrapper[3991]: I0318 09:53:26.149411 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:53:26.149732 master-0 kubenswrapper[3991]: E0318 09:53:26.149553 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343" Mar 18 09:53:26.713662 master-0 kubenswrapper[3991]: I0318 09:53:26.713597 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rzsk\" (UniqueName: \"kubernetes.io/projected/74795f5d-dcd7-4723-8931-c34b59ce3087-kube-api-access-8rzsk\") pod \"network-check-target-42l55\" (UID: \"74795f5d-dcd7-4723-8931-c34b59ce3087\") " pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 09:53:26.714436 master-0 kubenswrapper[3991]: E0318 09:53:26.713773 3991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 09:53:26.714436 master-0 kubenswrapper[3991]: E0318 09:53:26.713803 3991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 09:53:26.714436 master-0 kubenswrapper[3991]: E0318 09:53:26.713815 3991 projected.go:194] Error preparing data for projected volume kube-api-access-8rzsk for pod openshift-network-diagnostics/network-check-target-42l55: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 09:53:26.714436 master-0 kubenswrapper[3991]: E0318 09:53:26.713898 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74795f5d-dcd7-4723-8931-c34b59ce3087-kube-api-access-8rzsk podName:74795f5d-dcd7-4723-8931-c34b59ce3087 nodeName:}" failed. No retries permitted until 2026-03-18 09:53:30.713879682 +0000 UTC m=+94.672819577 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8rzsk" (UniqueName: "kubernetes.io/projected/74795f5d-dcd7-4723-8931-c34b59ce3087-kube-api-access-8rzsk") pod "network-check-target-42l55" (UID: "74795f5d-dcd7-4723-8931-c34b59ce3087") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 09:53:27.149811 master-0 kubenswrapper[3991]: I0318 09:53:27.149701 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 09:53:27.150769 master-0 kubenswrapper[3991]: E0318 09:53:27.150699 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-42l55" podUID="74795f5d-dcd7-4723-8931-c34b59ce3087" Mar 18 09:53:28.148952 master-0 kubenswrapper[3991]: I0318 09:53:28.148907 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:53:28.149513 master-0 kubenswrapper[3991]: E0318 09:53:28.149053 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343" Mar 18 09:53:29.150288 master-0 kubenswrapper[3991]: I0318 09:53:29.150214 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 09:53:29.150759 master-0 kubenswrapper[3991]: E0318 09:53:29.150429 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-42l55" podUID="74795f5d-dcd7-4723-8931-c34b59ce3087" Mar 18 09:53:30.149409 master-0 kubenswrapper[3991]: I0318 09:53:30.149319 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:53:30.149637 master-0 kubenswrapper[3991]: E0318 09:53:30.149447 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343" Mar 18 09:53:30.751113 master-0 kubenswrapper[3991]: I0318 09:53:30.751055 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rzsk\" (UniqueName: \"kubernetes.io/projected/74795f5d-dcd7-4723-8931-c34b59ce3087-kube-api-access-8rzsk\") pod \"network-check-target-42l55\" (UID: \"74795f5d-dcd7-4723-8931-c34b59ce3087\") " pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 09:53:30.751671 master-0 kubenswrapper[3991]: E0318 09:53:30.751252 3991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 09:53:30.751671 master-0 kubenswrapper[3991]: E0318 09:53:30.751287 3991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 09:53:30.751671 master-0 kubenswrapper[3991]: E0318 09:53:30.751299 3991 projected.go:194] Error preparing data for projected volume kube-api-access-8rzsk for pod openshift-network-diagnostics/network-check-target-42l55: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 09:53:30.751671 master-0 kubenswrapper[3991]: E0318 09:53:30.751363 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74795f5d-dcd7-4723-8931-c34b59ce3087-kube-api-access-8rzsk podName:74795f5d-dcd7-4723-8931-c34b59ce3087 nodeName:}" failed. No retries permitted until 2026-03-18 09:53:38.751343058 +0000 UTC m=+102.710282953 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8rzsk" (UniqueName: "kubernetes.io/projected/74795f5d-dcd7-4723-8931-c34b59ce3087-kube-api-access-8rzsk") pod "network-check-target-42l55" (UID: "74795f5d-dcd7-4723-8931-c34b59ce3087") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 09:53:31.149186 master-0 kubenswrapper[3991]: I0318 09:53:31.149145 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 09:53:31.149366 master-0 kubenswrapper[3991]: E0318 09:53:31.149273 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-42l55" podUID="74795f5d-dcd7-4723-8931-c34b59ce3087" Mar 18 09:53:31.465052 master-0 kubenswrapper[3991]: I0318 09:53:31.464938 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-7fl4x"] Mar 18 09:53:31.465449 master-0 kubenswrapper[3991]: I0318 09:53:31.465422 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-7fl4x" Mar 18 09:53:31.466805 master-0 kubenswrapper[3991]: I0318 09:53:31.466664 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 18 09:53:31.469640 master-0 kubenswrapper[3991]: I0318 09:53:31.469475 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 18 09:53:31.469640 master-0 kubenswrapper[3991]: I0318 09:53:31.469569 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 18 09:53:31.472370 master-0 kubenswrapper[3991]: I0318 09:53:31.472219 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 18 09:53:31.472370 master-0 kubenswrapper[3991]: I0318 09:53:31.472319 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 18 09:53:31.657012 master-0 kubenswrapper[3991]: I0318 09:53:31.656913 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ktpl\" (UniqueName: \"kubernetes.io/projected/bb942756-bac7-414d-b179-cebdce588a13-kube-api-access-2ktpl\") pod \"network-node-identity-7fl4x\" (UID: \"bb942756-bac7-414d-b179-cebdce588a13\") " pod="openshift-network-node-identity/network-node-identity-7fl4x" Mar 18 09:53:31.657236 master-0 kubenswrapper[3991]: I0318 09:53:31.657038 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bb942756-bac7-414d-b179-cebdce588a13-webhook-cert\") pod \"network-node-identity-7fl4x\" (UID: \"bb942756-bac7-414d-b179-cebdce588a13\") " pod="openshift-network-node-identity/network-node-identity-7fl4x" Mar 18 
09:53:31.657236 master-0 kubenswrapper[3991]: I0318 09:53:31.657092 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/bb942756-bac7-414d-b179-cebdce588a13-ovnkube-identity-cm\") pod \"network-node-identity-7fl4x\" (UID: \"bb942756-bac7-414d-b179-cebdce588a13\") " pod="openshift-network-node-identity/network-node-identity-7fl4x" Mar 18 09:53:31.657236 master-0 kubenswrapper[3991]: I0318 09:53:31.657137 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bb942756-bac7-414d-b179-cebdce588a13-env-overrides\") pod \"network-node-identity-7fl4x\" (UID: \"bb942756-bac7-414d-b179-cebdce588a13\") " pod="openshift-network-node-identity/network-node-identity-7fl4x" Mar 18 09:53:31.758926 master-0 kubenswrapper[3991]: I0318 09:53:31.757760 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ktpl\" (UniqueName: \"kubernetes.io/projected/bb942756-bac7-414d-b179-cebdce588a13-kube-api-access-2ktpl\") pod \"network-node-identity-7fl4x\" (UID: \"bb942756-bac7-414d-b179-cebdce588a13\") " pod="openshift-network-node-identity/network-node-identity-7fl4x" Mar 18 09:53:31.758926 master-0 kubenswrapper[3991]: I0318 09:53:31.757817 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bb942756-bac7-414d-b179-cebdce588a13-webhook-cert\") pod \"network-node-identity-7fl4x\" (UID: \"bb942756-bac7-414d-b179-cebdce588a13\") " pod="openshift-network-node-identity/network-node-identity-7fl4x" Mar 18 09:53:31.758926 master-0 kubenswrapper[3991]: I0318 09:53:31.757852 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: 
\"kubernetes.io/configmap/bb942756-bac7-414d-b179-cebdce588a13-ovnkube-identity-cm\") pod \"network-node-identity-7fl4x\" (UID: \"bb942756-bac7-414d-b179-cebdce588a13\") " pod="openshift-network-node-identity/network-node-identity-7fl4x" Mar 18 09:53:31.758926 master-0 kubenswrapper[3991]: I0318 09:53:31.757880 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bb942756-bac7-414d-b179-cebdce588a13-env-overrides\") pod \"network-node-identity-7fl4x\" (UID: \"bb942756-bac7-414d-b179-cebdce588a13\") " pod="openshift-network-node-identity/network-node-identity-7fl4x" Mar 18 09:53:31.758926 master-0 kubenswrapper[3991]: I0318 09:53:31.758500 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bb942756-bac7-414d-b179-cebdce588a13-env-overrides\") pod \"network-node-identity-7fl4x\" (UID: \"bb942756-bac7-414d-b179-cebdce588a13\") " pod="openshift-network-node-identity/network-node-identity-7fl4x" Mar 18 09:53:31.758926 master-0 kubenswrapper[3991]: I0318 09:53:31.758864 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/bb942756-bac7-414d-b179-cebdce588a13-ovnkube-identity-cm\") pod \"network-node-identity-7fl4x\" (UID: \"bb942756-bac7-414d-b179-cebdce588a13\") " pod="openshift-network-node-identity/network-node-identity-7fl4x" Mar 18 09:53:31.765789 master-0 kubenswrapper[3991]: I0318 09:53:31.765740 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bb942756-bac7-414d-b179-cebdce588a13-webhook-cert\") pod \"network-node-identity-7fl4x\" (UID: \"bb942756-bac7-414d-b179-cebdce588a13\") " pod="openshift-network-node-identity/network-node-identity-7fl4x" Mar 18 09:53:31.820852 master-0 kubenswrapper[3991]: I0318 09:53:31.819661 3991 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ktpl\" (UniqueName: \"kubernetes.io/projected/bb942756-bac7-414d-b179-cebdce588a13-kube-api-access-2ktpl\") pod \"network-node-identity-7fl4x\" (UID: \"bb942756-bac7-414d-b179-cebdce588a13\") " pod="openshift-network-node-identity/network-node-identity-7fl4x" Mar 18 09:53:32.077610 master-0 kubenswrapper[3991]: I0318 09:53:32.077539 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-7fl4x" Mar 18 09:53:32.149445 master-0 kubenswrapper[3991]: I0318 09:53:32.149330 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:53:32.149727 master-0 kubenswrapper[3991]: E0318 09:53:32.149481 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343" Mar 18 09:53:33.149425 master-0 kubenswrapper[3991]: I0318 09:53:33.149318 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 09:53:33.150382 master-0 kubenswrapper[3991]: E0318 09:53:33.149507 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-42l55" podUID="74795f5d-dcd7-4723-8931-c34b59ce3087" Mar 18 09:53:34.191944 master-0 kubenswrapper[3991]: I0318 09:53:34.191868 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:53:34.192690 master-0 kubenswrapper[3991]: E0318 09:53:34.192014 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343" Mar 18 09:53:34.192690 master-0 kubenswrapper[3991]: I0318 09:53:34.192081 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 09:53:34.192690 master-0 kubenswrapper[3991]: E0318 09:53:34.192136 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-42l55" podUID="74795f5d-dcd7-4723-8931-c34b59ce3087" Mar 18 09:53:36.149184 master-0 kubenswrapper[3991]: I0318 09:53:36.149100 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 09:53:36.150031 master-0 kubenswrapper[3991]: I0318 09:53:36.149177 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:53:36.150031 master-0 kubenswrapper[3991]: E0318 09:53:36.149284 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-42l55" podUID="74795f5d-dcd7-4723-8931-c34b59ce3087" Mar 18 09:53:36.150031 master-0 kubenswrapper[3991]: E0318 09:53:36.149370 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343" Mar 18 09:53:36.526679 master-0 kubenswrapper[3991]: I0318 09:53:36.526455 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs\") pod \"network-metrics-daemon-tbxt4\" (UID: \"0442ec6c-5973-40a5-a0c3-dc02de46d343\") " pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:53:36.526679 master-0 kubenswrapper[3991]: E0318 09:53:36.526635 3991 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 09:53:36.527085 master-0 kubenswrapper[3991]: E0318 09:53:36.526732 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs podName:0442ec6c-5973-40a5-a0c3-dc02de46d343 nodeName:}" failed. 
No retries permitted until 2026-03-18 09:54:08.526705179 +0000 UTC m=+132.485645114 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs") pod "network-metrics-daemon-tbxt4" (UID: "0442ec6c-5973-40a5-a0c3-dc02de46d343") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 09:53:38.149959 master-0 kubenswrapper[3991]: I0318 09:53:38.149903 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 09:53:38.151249 master-0 kubenswrapper[3991]: I0318 09:53:38.149956 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:53:38.151367 master-0 kubenswrapper[3991]: E0318 09:53:38.151237 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-42l55" podUID="74795f5d-dcd7-4723-8931-c34b59ce3087" Mar 18 09:53:38.151466 master-0 kubenswrapper[3991]: E0318 09:53:38.151383 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343" Mar 18 09:53:38.545707 master-0 kubenswrapper[3991]: I0318 09:53:38.545646 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7fl4x" event={"ID":"bb942756-bac7-414d-b179-cebdce588a13","Type":"ContainerStarted","Data":"b58497ff3c8993b13d6f045f9b3aa17b9b5e464305fd642acb69bc40d01db14a"} Mar 18 09:53:38.851187 master-0 kubenswrapper[3991]: I0318 09:53:38.851103 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rzsk\" (UniqueName: \"kubernetes.io/projected/74795f5d-dcd7-4723-8931-c34b59ce3087-kube-api-access-8rzsk\") pod \"network-check-target-42l55\" (UID: \"74795f5d-dcd7-4723-8931-c34b59ce3087\") " pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 09:53:38.851430 master-0 kubenswrapper[3991]: E0318 09:53:38.851396 3991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 09:53:38.851506 master-0 kubenswrapper[3991]: E0318 09:53:38.851439 3991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 09:53:38.851506 master-0 kubenswrapper[3991]: E0318 09:53:38.851460 3991 projected.go:194] Error preparing data for projected volume kube-api-access-8rzsk for pod openshift-network-diagnostics/network-check-target-42l55: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 09:53:38.851630 master-0 kubenswrapper[3991]: E0318 09:53:38.851539 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74795f5d-dcd7-4723-8931-c34b59ce3087-kube-api-access-8rzsk 
podName:74795f5d-dcd7-4723-8931-c34b59ce3087 nodeName:}" failed. No retries permitted until 2026-03-18 09:53:54.85151162 +0000 UTC m=+118.810451555 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-8rzsk" (UniqueName: "kubernetes.io/projected/74795f5d-dcd7-4723-8931-c34b59ce3087-kube-api-access-8rzsk") pod "network-check-target-42l55" (UID: "74795f5d-dcd7-4723-8931-c34b59ce3087") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 09:53:40.149178 master-0 kubenswrapper[3991]: I0318 09:53:40.149091 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 09:53:40.149729 master-0 kubenswrapper[3991]: I0318 09:53:40.149188 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:53:40.149729 master-0 kubenswrapper[3991]: E0318 09:53:40.149244 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-42l55" podUID="74795f5d-dcd7-4723-8931-c34b59ce3087" Mar 18 09:53:40.149729 master-0 kubenswrapper[3991]: E0318 09:53:40.149350 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343" Mar 18 09:53:41.215515 master-0 kubenswrapper[3991]: W0318 09:53:41.215443 3991 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Mar 18 09:53:41.217849 master-0 kubenswrapper[3991]: I0318 09:53:41.217749 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 18 09:53:41.558360 master-0 kubenswrapper[3991]: I0318 09:53:41.558266 3991 generic.go:334] "Generic (PLEG): container finished" podID="91331360-dc70-45bb-a815-e00664bae6c4" containerID="8de3d5cda49c071629c169597f57fc4a39ffa0565faf4afa9da96f88d8b22b28" exitCode=0 Mar 18 09:53:41.558563 master-0 kubenswrapper[3991]: I0318 09:53:41.558465 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dg6dw" event={"ID":"91331360-dc70-45bb-a815-e00664bae6c4","Type":"ContainerDied","Data":"8de3d5cda49c071629c169597f57fc4a39ffa0565faf4afa9da96f88d8b22b28"} Mar 18 09:53:42.149487 master-0 kubenswrapper[3991]: I0318 09:53:42.149281 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:53:42.149487 master-0 kubenswrapper[3991]: I0318 09:53:42.149304 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 09:53:42.149487 master-0 kubenswrapper[3991]: E0318 09:53:42.149471 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343" Mar 18 09:53:42.149950 master-0 kubenswrapper[3991]: E0318 09:53:42.149602 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-42l55" podUID="74795f5d-dcd7-4723-8931-c34b59ce3087" Mar 18 09:53:42.564409 master-0 kubenswrapper[3991]: I0318 09:53:42.564328 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx" event={"ID":"d0605021-862d-424a-a4c1-037fb005b77e","Type":"ContainerStarted","Data":"eb346301fe01e98fabdb59a67db563268a1e2d2d2c9e4e2f98ed640abf5fcf03"} Mar 18 09:53:42.567902 master-0 kubenswrapper[3991]: I0318 09:53:42.567797 3991 generic.go:334] "Generic (PLEG): container finished" podID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerID="ab4fe89b410d3e37696ed45ba842ec613441fe4ca4d4d1de69ac7d8e4cdd191e" exitCode=0 Mar 18 09:53:42.568010 master-0 kubenswrapper[3991]: I0318 09:53:42.567897 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c" event={"ID":"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd","Type":"ContainerDied","Data":"ab4fe89b410d3e37696ed45ba842ec613441fe4ca4d4d1de69ac7d8e4cdd191e"} Mar 18 09:53:42.571174 master-0 kubenswrapper[3991]: I0318 09:53:42.571099 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7fl4x" event={"ID":"bb942756-bac7-414d-b179-cebdce588a13","Type":"ContainerStarted","Data":"59a8b56b2c5b54ef9ce1252e4c00aeb3ab2ee3eaf825a3df0fbff0f1e980170f"} Mar 18 09:53:43.576742 master-0 kubenswrapper[3991]: I0318 09:53:43.576218 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c" event={"ID":"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd","Type":"ContainerStarted","Data":"fa248870d055c3bba2ba5df757728f5a762bb886b6135fb6dce6f1a7437e400e"} Mar 18 09:53:43.767729 master-0 kubenswrapper[3991]: I0318 09:53:43.767659 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0-master-0" podStartSLOduration=2.767641998 podStartE2EDuration="2.767641998s" 
podCreationTimestamp="2026-03-18 09:53:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:53:43.767381281 +0000 UTC m=+107.726321176" watchObservedRunningTime="2026-03-18 09:53:43.767641998 +0000 UTC m=+107.726581893" Mar 18 09:53:44.149004 master-0 kubenswrapper[3991]: I0318 09:53:44.148806 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:53:44.149004 master-0 kubenswrapper[3991]: I0318 09:53:44.148883 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 09:53:44.149004 master-0 kubenswrapper[3991]: E0318 09:53:44.148954 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343" Mar 18 09:53:44.149245 master-0 kubenswrapper[3991]: E0318 09:53:44.149027 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-42l55" podUID="74795f5d-dcd7-4723-8931-c34b59ce3087" Mar 18 09:53:44.199240 master-0 kubenswrapper[3991]: I0318 09:53:44.199183 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr" Mar 18 09:53:44.199441 master-0 kubenswrapper[3991]: E0318 09:53:44.199399 3991 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 09:53:44.199515 master-0 kubenswrapper[3991]: E0318 09:53:44.199497 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert podName:15f8941b-dba2-40ba-86d5-3318f5b635cc nodeName:}" failed. No retries permitted until 2026-03-18 09:54:48.199475662 +0000 UTC m=+172.158415557 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert") pod "cluster-version-operator-56d8475767-c2qzr" (UID: "15f8941b-dba2-40ba-86d5-3318f5b635cc") : secret "cluster-version-operator-serving-cert" not found Mar 18 09:53:44.581576 master-0 kubenswrapper[3991]: I0318 09:53:44.581523 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c" event={"ID":"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd","Type":"ContainerStarted","Data":"232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f"} Mar 18 09:53:44.581576 master-0 kubenswrapper[3991]: I0318 09:53:44.581568 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c" event={"ID":"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd","Type":"ContainerStarted","Data":"f3b713b1d44464a6aa1ac327af12aa163f7b3fdff26047cb55c6b7caf9ff0a3a"} Mar 18 09:53:44.583538 master-0 kubenswrapper[3991]: I0318 09:53:44.583487 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7fl4x" event={"ID":"bb942756-bac7-414d-b179-cebdce588a13","Type":"ContainerStarted","Data":"11b5b6c3c569b883f4e3bfd269fb3345429d4cace9fc05301ab08ee60a18aa95"} Mar 18 09:53:45.589686 master-0 kubenswrapper[3991]: I0318 09:53:45.589596 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c" event={"ID":"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd","Type":"ContainerStarted","Data":"b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7"} Mar 18 09:53:45.664491 master-0 kubenswrapper[3991]: I0318 09:53:45.664128 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-7fl4x" podStartSLOduration=11.776217073 podStartE2EDuration="14.664107582s" podCreationTimestamp="2026-03-18 09:53:31 +0000 UTC" 
firstStartedPulling="2026-03-18 09:53:38.498600404 +0000 UTC m=+102.457540339" lastFinishedPulling="2026-03-18 09:53:41.386490923 +0000 UTC m=+105.345430848" observedRunningTime="2026-03-18 09:53:45.663615279 +0000 UTC m=+109.622555184" watchObservedRunningTime="2026-03-18 09:53:45.664107582 +0000 UTC m=+109.623047497" Mar 18 09:53:45.671878 master-0 kubenswrapper[3991]: I0318 09:53:45.664816 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx" podStartSLOduration=5.874272035 podStartE2EDuration="30.66480796s" podCreationTimestamp="2026-03-18 09:53:15 +0000 UTC" firstStartedPulling="2026-03-18 09:53:16.595116556 +0000 UTC m=+80.554056451" lastFinishedPulling="2026-03-18 09:53:41.385652451 +0000 UTC m=+105.344592376" observedRunningTime="2026-03-18 09:53:44.596366677 +0000 UTC m=+108.555306612" watchObservedRunningTime="2026-03-18 09:53:45.66480796 +0000 UTC m=+109.623747875" Mar 18 09:53:46.149865 master-0 kubenswrapper[3991]: I0318 09:53:46.149646 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:53:46.149865 master-0 kubenswrapper[3991]: I0318 09:53:46.149710 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 09:53:46.149865 master-0 kubenswrapper[3991]: E0318 09:53:46.149778 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343" Mar 18 09:53:46.150377 master-0 kubenswrapper[3991]: E0318 09:53:46.150306 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-42l55" podUID="74795f5d-dcd7-4723-8931-c34b59ce3087" Mar 18 09:53:46.597571 master-0 kubenswrapper[3991]: I0318 09:53:46.597511 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c" event={"ID":"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd","Type":"ContainerStarted","Data":"e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead"} Mar 18 09:53:47.603502 master-0 kubenswrapper[3991]: I0318 09:53:47.603409 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c" event={"ID":"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd","Type":"ContainerStarted","Data":"3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286"} Mar 18 09:53:48.149755 master-0 kubenswrapper[3991]: I0318 09:53:48.149659 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:53:48.149755 master-0 kubenswrapper[3991]: I0318 09:53:48.149712 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 09:53:48.150130 master-0 kubenswrapper[3991]: E0318 09:53:48.149800 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343" Mar 18 09:53:48.150130 master-0 kubenswrapper[3991]: E0318 09:53:48.149889 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-42l55" podUID="74795f5d-dcd7-4723-8931-c34b59ce3087" Mar 18 09:53:48.177998 master-0 kubenswrapper[3991]: I0318 09:53:48.177630 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 18 09:53:50.148924 master-0 kubenswrapper[3991]: I0318 09:53:50.148818 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:53:50.148924 master-0 kubenswrapper[3991]: I0318 09:53:50.148808 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 09:53:50.150283 master-0 kubenswrapper[3991]: E0318 09:53:50.148975 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343" Mar 18 09:53:50.150283 master-0 kubenswrapper[3991]: E0318 09:53:50.148992 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-42l55" podUID="74795f5d-dcd7-4723-8931-c34b59ce3087" Mar 18 09:53:51.162294 master-0 kubenswrapper[3991]: I0318 09:53:51.162224 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 18 09:53:51.617804 master-0 kubenswrapper[3991]: I0318 09:53:51.617746 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c" event={"ID":"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd","Type":"ContainerStarted","Data":"141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981"} Mar 18 09:53:52.149356 master-0 kubenswrapper[3991]: I0318 09:53:52.149073 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 09:53:52.149554 master-0 kubenswrapper[3991]: I0318 09:53:52.149076 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:53:52.149635 master-0 kubenswrapper[3991]: E0318 09:53:52.149439 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-42l55" podUID="74795f5d-dcd7-4723-8931-c34b59ce3087" Mar 18 09:53:52.149635 master-0 kubenswrapper[3991]: E0318 09:53:52.149535 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343" Mar 18 09:53:52.625047 master-0 kubenswrapper[3991]: I0318 09:53:52.624989 3991 generic.go:334] "Generic (PLEG): container finished" podID="91331360-dc70-45bb-a815-e00664bae6c4" containerID="f03028f16df79cfb2d65134dc28295edb8b443255b855706b86769e87e1604c6" exitCode=0 Mar 18 09:53:52.625047 master-0 kubenswrapper[3991]: I0318 09:53:52.625047 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dg6dw" event={"ID":"91331360-dc70-45bb-a815-e00664bae6c4","Type":"ContainerDied","Data":"f03028f16df79cfb2d65134dc28295edb8b443255b855706b86769e87e1604c6"} Mar 18 09:53:52.789776 master-0 kubenswrapper[3991]: I0318 09:53:52.789691 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-controller-manager-master-0" podStartSLOduration=5.7896676110000005 podStartE2EDuration="5.789667611s" podCreationTimestamp="2026-03-18 09:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:53:52.777385538 +0000 UTC m=+116.736325473" watchObservedRunningTime="2026-03-18 09:53:52.789667611 +0000 UTC m=+116.748607506" Mar 18 09:53:52.810436 master-0 kubenswrapper[3991]: I0318 09:53:52.810334 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podStartSLOduration=1.810308357 podStartE2EDuration="1.810308357s" podCreationTimestamp="2026-03-18 09:53:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:53:52.790615545 +0000 UTC m=+116.749555440" watchObservedRunningTime="2026-03-18 09:53:52.810308357 +0000 UTC m=+116.769248282" Mar 18 09:53:53.630201 master-0 kubenswrapper[3991]: I0318 09:53:53.630139 3991 generic.go:334] "Generic (PLEG): container finished" podID="91331360-dc70-45bb-a815-e00664bae6c4" containerID="0538eb942c1197a086b3273af768571780d6d5af303141476810f1cd7daec3cc" exitCode=0 Mar 18 09:53:53.630201 master-0 kubenswrapper[3991]: I0318 09:53:53.630183 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dg6dw" event={"ID":"91331360-dc70-45bb-a815-e00664bae6c4","Type":"ContainerDied","Data":"0538eb942c1197a086b3273af768571780d6d5af303141476810f1cd7daec3cc"} Mar 18 09:53:54.149639 master-0 kubenswrapper[3991]: I0318 09:53:54.149572 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 09:53:54.151070 master-0 kubenswrapper[3991]: I0318 09:53:54.149584 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:53:54.151070 master-0 kubenswrapper[3991]: E0318 09:53:54.149745 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-42l55" podUID="74795f5d-dcd7-4723-8931-c34b59ce3087" Mar 18 09:53:54.151070 master-0 kubenswrapper[3991]: E0318 09:53:54.149943 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343" Mar 18 09:53:54.642538 master-0 kubenswrapper[3991]: I0318 09:53:54.642406 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c" event={"ID":"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd","Type":"ContainerStarted","Data":"b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1"} Mar 18 09:53:54.644381 master-0 kubenswrapper[3991]: I0318 09:53:54.642967 3991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c" Mar 18 09:53:54.657663 master-0 kubenswrapper[3991]: I0318 09:53:54.657590 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dg6dw" event={"ID":"91331360-dc70-45bb-a815-e00664bae6c4","Type":"ContainerStarted","Data":"abb3d8ca0a56744f6cf68b24b0b055c8e41b48f502a9daa062aec4e0fa202639"} Mar 18 09:53:54.674243 master-0 kubenswrapper[3991]: I0318 09:53:54.674011 3991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c" Mar 18 09:53:54.700506 master-0 kubenswrapper[3991]: I0318 09:53:54.700408 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-dg6dw" podStartSLOduration=4.20005114 podStartE2EDuration="51.70039054s" podCreationTimestamp="2026-03-18 09:53:03 +0000 UTC" 
firstStartedPulling="2026-03-18 09:53:04.122018402 +0000 UTC m=+68.080958337" lastFinishedPulling="2026-03-18 09:53:51.622357832 +0000 UTC m=+115.581297737" observedRunningTime="2026-03-18 09:53:54.698537073 +0000 UTC m=+118.657477018" watchObservedRunningTime="2026-03-18 09:53:54.70039054 +0000 UTC m=+118.659330445" Mar 18 09:53:54.700732 master-0 kubenswrapper[3991]: I0318 09:53:54.700635 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c" podStartSLOduration=14.100178801 podStartE2EDuration="38.700607696s" podCreationTimestamp="2026-03-18 09:53:16 +0000 UTC" firstStartedPulling="2026-03-18 09:53:16.568198161 +0000 UTC m=+80.527138056" lastFinishedPulling="2026-03-18 09:53:41.168627016 +0000 UTC m=+105.127566951" observedRunningTime="2026-03-18 09:53:54.682087844 +0000 UTC m=+118.641027829" watchObservedRunningTime="2026-03-18 09:53:54.700607696 +0000 UTC m=+118.659547611" Mar 18 09:53:54.903365 master-0 kubenswrapper[3991]: I0318 09:53:54.903098 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rzsk\" (UniqueName: \"kubernetes.io/projected/74795f5d-dcd7-4723-8931-c34b59ce3087-kube-api-access-8rzsk\") pod \"network-check-target-42l55\" (UID: \"74795f5d-dcd7-4723-8931-c34b59ce3087\") " pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 09:53:54.903365 master-0 kubenswrapper[3991]: E0318 09:53:54.903300 3991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 09:53:54.903365 master-0 kubenswrapper[3991]: E0318 09:53:54.903333 3991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 09:53:54.903365 master-0 kubenswrapper[3991]: E0318 09:53:54.903348 3991 projected.go:194] Error 
preparing data for projected volume kube-api-access-8rzsk for pod openshift-network-diagnostics/network-check-target-42l55: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 09:53:54.903767 master-0 kubenswrapper[3991]: E0318 09:53:54.903413 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74795f5d-dcd7-4723-8931-c34b59ce3087-kube-api-access-8rzsk podName:74795f5d-dcd7-4723-8931-c34b59ce3087 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:26.903391388 +0000 UTC m=+150.862331293 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-8rzsk" (UniqueName: "kubernetes.io/projected/74795f5d-dcd7-4723-8931-c34b59ce3087-kube-api-access-8rzsk") pod "network-check-target-42l55" (UID: "74795f5d-dcd7-4723-8931-c34b59ce3087") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 09:53:55.664353 master-0 kubenswrapper[3991]: I0318 09:53:55.664293 3991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c" Mar 18 09:53:55.664353 master-0 kubenswrapper[3991]: I0318 09:53:55.664351 3991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c" Mar 18 09:53:55.731484 master-0 kubenswrapper[3991]: I0318 09:53:55.731170 3991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c" Mar 18 09:53:56.027043 master-0 kubenswrapper[3991]: I0318 09:53:56.026871 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-tbxt4"] Mar 18 09:53:56.027305 master-0 kubenswrapper[3991]: I0318 09:53:56.027081 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:53:56.027305 master-0 kubenswrapper[3991]: E0318 09:53:56.027283 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343" Mar 18 09:53:56.029123 master-0 kubenswrapper[3991]: I0318 09:53:56.029057 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-42l55"] Mar 18 09:53:56.029279 master-0 kubenswrapper[3991]: I0318 09:53:56.029196 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 09:53:56.029386 master-0 kubenswrapper[3991]: E0318 09:53:56.029284 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-42l55" podUID="74795f5d-dcd7-4723-8931-c34b59ce3087" Mar 18 09:53:56.918739 master-0 kubenswrapper[3991]: E0318 09:53:56.918673 3991 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Mar 18 09:53:57.149862 master-0 kubenswrapper[3991]: I0318 09:53:57.149789 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4"
Mar 18 09:53:57.150086 master-0 kubenswrapper[3991]: E0318 09:53:57.149961 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343"
Mar 18 09:53:57.201044 master-0 kubenswrapper[3991]: E0318 09:53:57.200858 3991 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Mar 18 09:53:58.149660 master-0 kubenswrapper[3991]: I0318 09:53:58.149568 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-42l55"
Mar 18 09:53:58.150515 master-0 kubenswrapper[3991]: E0318 09:53:58.149772 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-42l55" podUID="74795f5d-dcd7-4723-8931-c34b59ce3087"
Mar 18 09:53:58.709361 master-0 kubenswrapper[3991]: I0318 09:53:58.702284 3991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tdz4c"]
Mar 18 09:53:58.709361 master-0 kubenswrapper[3991]: I0318 09:53:58.702815 3991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerName="ovn-controller" containerID="cri-o://fa248870d055c3bba2ba5df757728f5a762bb886b6135fb6dce6f1a7437e400e" gracePeriod=30
Mar 18 09:53:58.709361 master-0 kubenswrapper[3991]: I0318 09:53:58.703398 3991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerName="sbdb" containerID="cri-o://141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981" gracePeriod=30
Mar 18 09:53:58.709361 master-0 kubenswrapper[3991]: I0318 09:53:58.703462 3991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerName="nbdb" containerID="cri-o://3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286" gracePeriod=30
Mar 18 09:53:58.709361 master-0 kubenswrapper[3991]: I0318 09:53:58.703559 3991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerName="northd" containerID="cri-o://e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead" gracePeriod=30
Mar 18 09:53:58.709361 master-0 kubenswrapper[3991]: I0318 09:53:58.703622 3991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7" gracePeriod=30
Mar 18 09:53:58.709361 master-0 kubenswrapper[3991]: I0318 09:53:58.703681 3991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerName="kube-rbac-proxy-node" containerID="cri-o://232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f" gracePeriod=30
Mar 18 09:53:58.709361 master-0 kubenswrapper[3991]: I0318 09:53:58.703797 3991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerName="ovn-acl-logging" containerID="cri-o://f3b713b1d44464a6aa1ac327af12aa163f7b3fdff26047cb55c6b7caf9ff0a3a" gracePeriod=30
Mar 18 09:53:58.729402 master-0 kubenswrapper[3991]: I0318 09:53:58.729328 3991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerName="ovnkube-controller" containerID="cri-o://b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1" gracePeriod=30
Mar 18 09:53:58.741899 master-0 kubenswrapper[3991]: I0318 09:53:58.741735 3991 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerName="ovnkube-controller" probeResult="failure" output=""
Mar 18 09:53:58.986999 master-0 kubenswrapper[3991]: I0318 09:53:58.986952 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tdz4c_ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd/ovnkube-controller/0.log"
Mar 18 09:53:58.988780 master-0 kubenswrapper[3991]: I0318 09:53:58.988759 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tdz4c_ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd/kube-rbac-proxy-ovn-metrics/0.log"
Mar 18 09:53:58.989300 master-0 kubenswrapper[3991]: I0318 09:53:58.989284 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tdz4c_ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd/kube-rbac-proxy-node/0.log"
Mar 18 09:53:58.989751 master-0 kubenswrapper[3991]: I0318 09:53:58.989731 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tdz4c_ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd/ovn-acl-logging/0.log"
Mar 18 09:53:58.990283 master-0 kubenswrapper[3991]: I0318 09:53:58.990259 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tdz4c_ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd/ovn-controller/0.log"
Mar 18 09:53:58.990711 master-0 kubenswrapper[3991]: I0318 09:53:58.990693 3991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c"
Mar 18 09:53:59.032025 master-0 kubenswrapper[3991]: I0318 09:53:59.030647 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-frnfl"]
Mar 18 09:53:59.032025 master-0 kubenswrapper[3991]: E0318 09:53:59.030754 3991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerName="northd"
Mar 18 09:53:59.032025 master-0 kubenswrapper[3991]: I0318 09:53:59.030766 3991 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerName="northd"
Mar 18 09:53:59.032025 master-0 kubenswrapper[3991]: E0318 09:53:59.030775 3991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerName="ovn-controller"
Mar 18 09:53:59.032025 master-0 kubenswrapper[3991]: I0318 09:53:59.030782 3991 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerName="ovn-controller"
Mar 18 09:53:59.032025 master-0 kubenswrapper[3991]: E0318 09:53:59.030789 3991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerName="kubecfg-setup"
Mar 18 09:53:59.032025 master-0 kubenswrapper[3991]: I0318 09:53:59.030795 3991 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerName="kubecfg-setup"
Mar 18 09:53:59.032025 master-0 kubenswrapper[3991]: E0318 09:53:59.030802 3991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerName="kube-rbac-proxy-ovn-metrics"
Mar 18 09:53:59.032025 master-0 kubenswrapper[3991]: I0318 09:53:59.030807 3991 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerName="kube-rbac-proxy-ovn-metrics"
Mar 18 09:53:59.032025 master-0 kubenswrapper[3991]: E0318 09:53:59.030814 3991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerName="ovnkube-controller"
Mar 18 09:53:59.032025 master-0 kubenswrapper[3991]: I0318 09:53:59.030834 3991 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerName="ovnkube-controller"
Mar 18 09:53:59.032025 master-0 kubenswrapper[3991]: E0318 09:53:59.030841 3991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerName="ovn-acl-logging"
Mar 18 09:53:59.032025 master-0 kubenswrapper[3991]: I0318 09:53:59.030846 3991 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerName="ovn-acl-logging"
Mar 18 09:53:59.032025 master-0 kubenswrapper[3991]: E0318 09:53:59.030852 3991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerName="nbdb"
Mar 18 09:53:59.032025 master-0 kubenswrapper[3991]: I0318 09:53:59.030859 3991 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerName="nbdb"
Mar 18 09:53:59.032025 master-0 kubenswrapper[3991]: E0318 09:53:59.030866 3991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerName="sbdb"
Mar 18 09:53:59.032025 master-0 kubenswrapper[3991]: I0318 09:53:59.030872 3991 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerName="sbdb"
Mar 18 09:53:59.032025 master-0 kubenswrapper[3991]: E0318 09:53:59.030878 3991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerName="kube-rbac-proxy-node"
Mar 18 09:53:59.032025 master-0 kubenswrapper[3991]: I0318 09:53:59.030884 3991 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerName="kube-rbac-proxy-node"
Mar 18 09:53:59.032025 master-0 kubenswrapper[3991]: I0318 09:53:59.030917 3991 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerName="nbdb"
Mar 18 09:53:59.032025 master-0 kubenswrapper[3991]: I0318 09:53:59.030924 3991 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerName="ovn-controller"
Mar 18 09:53:59.032025 master-0 kubenswrapper[3991]: I0318 09:53:59.030930 3991 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerName="kube-rbac-proxy-ovn-metrics"
Mar 18 09:53:59.032025 master-0 kubenswrapper[3991]: I0318 09:53:59.030938 3991 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerName="sbdb"
Mar 18 09:53:59.032025 master-0 kubenswrapper[3991]: I0318 09:53:59.030945 3991 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerName="ovn-acl-logging"
Mar 18 09:53:59.032025 master-0 kubenswrapper[3991]: I0318 09:53:59.030952 3991 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerName="kube-rbac-proxy-node"
Mar 18 09:53:59.032025 master-0 kubenswrapper[3991]: I0318 09:53:59.030959 3991 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerName="northd"
Mar 18 09:53:59.032025 master-0 kubenswrapper[3991]: I0318 09:53:59.030967 3991 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerName="ovnkube-controller"
Mar 18 09:53:59.036673 master-0 kubenswrapper[3991]: I0318 09:53:59.036632 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:53:59.144637 master-0 kubenswrapper[3991]: I0318 09:53:59.144559 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-systemd-units\") pod \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") "
Mar 18 09:53:59.144886 master-0 kubenswrapper[3991]: I0318 09:53:59.144660 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-run-netns\") pod \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") "
Mar 18 09:53:59.144886 master-0 kubenswrapper[3991]: I0318 09:53:59.144705 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-kubelet\") pod \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") "
Mar 18 09:53:59.144886 master-0 kubenswrapper[3991]: I0318 09:53:59.144759 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-node-log\") pod \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") "
Mar 18 09:53:59.144886 master-0 kubenswrapper[3991]: I0318 09:53:59.144768 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" (UID: "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:53:59.144886 master-0 kubenswrapper[3991]: I0318 09:53:59.144798 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-cni-bin\") pod \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") "
Mar 18 09:53:59.144886 master-0 kubenswrapper[3991]: I0318 09:53:59.144873 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") "
Mar 18 09:53:59.145373 master-0 kubenswrapper[3991]: I0318 09:53:59.144927 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-ovnkube-config\") pod \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") "
Mar 18 09:53:59.145373 master-0 kubenswrapper[3991]: I0318 09:53:59.144978 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-env-overrides\") pod \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") "
Mar 18 09:53:59.145373 master-0 kubenswrapper[3991]: I0318 09:53:59.144982 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" (UID: "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:53:59.145373 master-0 kubenswrapper[3991]: I0318 09:53:59.145021 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-node-log" (OuterVolumeSpecName: "node-log") pod "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" (UID: "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:53:59.145373 master-0 kubenswrapper[3991]: I0318 09:53:59.145030 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4mbp\" (UniqueName: \"kubernetes.io/projected/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-kube-api-access-g4mbp\") pod \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") "
Mar 18 09:53:59.145373 master-0 kubenswrapper[3991]: I0318 09:53:59.145002 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" (UID: "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:53:59.145373 master-0 kubenswrapper[3991]: I0318 09:53:59.145068 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" (UID: "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:53:59.145373 master-0 kubenswrapper[3991]: I0318 09:53:59.145103 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-cni-netd\") pod \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") "
Mar 18 09:53:59.145373 master-0 kubenswrapper[3991]: I0318 09:53:59.145144 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" (UID: "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:53:59.145373 master-0 kubenswrapper[3991]: I0318 09:53:59.145177 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-slash\") pod \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") "
Mar 18 09:53:59.145373 master-0 kubenswrapper[3991]: I0318 09:53:59.145238 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-slash" (OuterVolumeSpecName: "host-slash") pod "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" (UID: "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:53:59.145373 master-0 kubenswrapper[3991]: I0318 09:53:59.145267 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-run-ovn\") pod \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") "
Mar 18 09:53:59.145373 master-0 kubenswrapper[3991]: I0318 09:53:59.145336 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" (UID: "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:53:59.145373 master-0 kubenswrapper[3991]: I0318 09:53:59.145365 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-var-lib-openvswitch\") pod \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") "
Mar 18 09:53:59.145373 master-0 kubenswrapper[3991]: I0318 09:53:59.145395 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" (UID: "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:53:59.146383 master-0 kubenswrapper[3991]: I0318 09:53:59.145462 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" (UID: "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:53:59.146383 master-0 kubenswrapper[3991]: I0318 09:53:59.145475 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-log-socket\") pod \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") "
Mar 18 09:53:59.146383 master-0 kubenswrapper[3991]: I0318 09:53:59.145502 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-log-socket" (OuterVolumeSpecName: "log-socket") pod "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" (UID: "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:53:59.146383 master-0 kubenswrapper[3991]: I0318 09:53:59.145614 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-run-ovn-kubernetes\") pod \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") "
Mar 18 09:53:59.146383 master-0 kubenswrapper[3991]: I0318 09:53:59.145666 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-ovnkube-script-lib\") pod \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") "
Mar 18 09:53:59.146383 master-0 kubenswrapper[3991]: I0318 09:53:59.145695 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-run-openvswitch\") pod \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") "
Mar 18 09:53:59.146383 master-0 kubenswrapper[3991]: I0318 09:53:59.145721 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-run-systemd\") pod \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") "
Mar 18 09:53:59.146383 master-0 kubenswrapper[3991]: I0318 09:53:59.145718 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" (UID: "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:53:59.146383 master-0 kubenswrapper[3991]: I0318 09:53:59.145740 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" (UID: "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:53:59.146383 master-0 kubenswrapper[3991]: I0318 09:53:59.145746 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-ovn-node-metrics-cert\") pod \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") "
Mar 18 09:53:59.146383 master-0 kubenswrapper[3991]: I0318 09:53:59.145779 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" (UID: "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:53:59.146383 master-0 kubenswrapper[3991]: I0318 09:53:59.145782 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-etc-openvswitch\") pod \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\" (UID: \"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd\") "
Mar 18 09:53:59.146383 master-0 kubenswrapper[3991]: I0318 09:53:59.145802 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" (UID: "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:53:59.146383 master-0 kubenswrapper[3991]: I0318 09:53:59.146188 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" (UID: "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:53:59.146383 master-0 kubenswrapper[3991]: I0318 09:53:59.146227 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" (UID: "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:53:59.146383 master-0 kubenswrapper[3991]: I0318 09:53:59.145943 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-run-systemd\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:53:59.147353 master-0 kubenswrapper[3991]: I0318 09:53:59.146300 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmffc\" (UniqueName: \"kubernetes.io/projected/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-kube-api-access-gmffc\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:53:59.147353 master-0 kubenswrapper[3991]: I0318 09:53:59.146328 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-ovnkube-script-lib\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:53:59.147353 master-0 kubenswrapper[3991]: I0318 09:53:59.146384 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-etc-openvswitch\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:53:59.147353 master-0 kubenswrapper[3991]: I0318 09:53:59.146425 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-run-ovn-kubernetes\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:53:59.147353 master-0 kubenswrapper[3991]: I0318 09:53:59.146508 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:53:59.147353 master-0 kubenswrapper[3991]: I0318 09:53:59.146555 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-ovnkube-config\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:53:59.147353 master-0 kubenswrapper[3991]: I0318 09:53:59.146608 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-slash\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:53:59.147353 master-0 kubenswrapper[3991]: I0318 09:53:59.146637 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-ovn-node-metrics-cert\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:53:59.147353 master-0 kubenswrapper[3991]: I0318 09:53:59.146666 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-var-lib-openvswitch\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:53:59.147353 master-0 kubenswrapper[3991]: I0318 09:53:59.146696 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-run-ovn\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:53:59.147353 master-0 kubenswrapper[3991]: I0318 09:53:59.146724 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-node-log\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:53:59.147353 master-0 kubenswrapper[3991]: I0318 09:53:59.146753 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-systemd-units\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:53:59.147353 master-0 kubenswrapper[3991]: I0318 09:53:59.146780 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-run-netns\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:53:59.147353 master-0 kubenswrapper[3991]: I0318 09:53:59.146810 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-cni-netd\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:53:59.147353 master-0 kubenswrapper[3991]: I0318 09:53:59.146863 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-kubelet\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:53:59.147353 master-0 kubenswrapper[3991]: I0318 09:53:59.146893 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-cni-bin\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:53:59.148313 master-0 kubenswrapper[3991]: I0318 09:53:59.146925 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-log-socket\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:53:59.148313 master-0 kubenswrapper[3991]: I0318 09:53:59.146972 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-run-openvswitch\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:53:59.148313 master-0 kubenswrapper[3991]: I0318 09:53:59.147002 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-env-overrides\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:53:59.148313 master-0 kubenswrapper[3991]: I0318 09:53:59.147045 3991 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-log-socket\") on node \"master-0\" DevicePath \"\""
Mar 18 09:53:59.148313 master-0 kubenswrapper[3991]: I0318 09:53:59.147065 3991 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-run-ovn-kubernetes\") on node \"master-0\" DevicePath \"\""
Mar 18 09:53:59.148313 master-0 kubenswrapper[3991]: I0318 09:53:59.147085 3991 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-ovnkube-script-lib\") on node \"master-0\" DevicePath \"\""
Mar 18 09:53:59.148313 master-0 kubenswrapper[3991]: I0318 09:53:59.147103 3991 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-etc-openvswitch\") on node \"master-0\" DevicePath \"\""
Mar 18 09:53:59.148313 master-0 kubenswrapper[3991]: I0318 09:53:59.147120 3991 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-run-openvswitch\") on node \"master-0\" DevicePath \"\""
Mar 18 09:53:59.148313 master-0 kubenswrapper[3991]: I0318 09:53:59.147137 3991 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-run-netns\") on node \"master-0\" DevicePath \"\""
Mar 18 09:53:59.148313 master-0 kubenswrapper[3991]: I0318 09:53:59.147155 3991 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-systemd-units\") on node \"master-0\" DevicePath \"\""
Mar 18 09:53:59.148313 master-0 kubenswrapper[3991]: I0318 09:53:59.147172 3991 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-kubelet\") on node \"master-0\" DevicePath \"\""
Mar 18 09:53:59.148313 master-0 kubenswrapper[3991]: I0318 09:53:59.147189 3991 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-node-log\") on node \"master-0\" DevicePath \"\""
Mar 18 09:53:59.148313 master-0 kubenswrapper[3991]: I0318 09:53:59.147205 3991 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-ovnkube-config\") on node \"master-0\" DevicePath \"\""
Mar 18 09:53:59.148313 master-0 kubenswrapper[3991]: I0318 09:53:59.147223 3991 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-cni-bin\") on node \"master-0\" DevicePath \"\""
Mar 18 09:53:59.148313 master-0 kubenswrapper[3991]: I0318 09:53:59.147241 3991 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-var-lib-cni-networks-ovn-kubernetes\") on node \"master-0\" DevicePath \"\""
Mar 18 09:53:59.148313 master-0 kubenswrapper[3991]: I0318 09:53:59.147258 3991 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-env-overrides\") on node \"master-0\" DevicePath \"\""
Mar 18 09:53:59.148313 master-0 kubenswrapper[3991]: I0318 09:53:59.147277 3991 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-cni-netd\") on node \"master-0\" DevicePath \"\""
Mar 18 09:53:59.148313 master-0 kubenswrapper[3991]: I0318 09:53:59.147294 3991 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-host-slash\") on node \"master-0\" DevicePath \"\""
Mar 18 09:53:59.148313 master-0 kubenswrapper[3991]: I0318 09:53:59.147311 3991 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName:
\"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-run-ovn\") on node \"master-0\" DevicePath \"\"" Mar 18 09:53:59.148313 master-0 kubenswrapper[3991]: I0318 09:53:59.147329 3991 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-var-lib-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 18 09:53:59.149767 master-0 kubenswrapper[3991]: I0318 09:53:59.149713 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:53:59.150703 master-0 kubenswrapper[3991]: E0318 09:53:59.149904 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343" Mar 18 09:53:59.151225 master-0 kubenswrapper[3991]: I0318 09:53:59.151162 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-kube-api-access-g4mbp" (OuterVolumeSpecName: "kube-api-access-g4mbp") pod "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" (UID: "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd"). InnerVolumeSpecName "kube-api-access-g4mbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:53:59.151680 master-0 kubenswrapper[3991]: I0318 09:53:59.151613 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" (UID: "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:53:59.155301 master-0 kubenswrapper[3991]: I0318 09:53:59.155260 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" (UID: "ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:53:59.248986 master-0 kubenswrapper[3991]: I0318 09:53:59.248576 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-etc-openvswitch\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.248986 master-0 kubenswrapper[3991]: I0318 09:53:59.248672 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-run-ovn-kubernetes\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.248986 master-0 kubenswrapper[3991]: I0318 09:53:59.248712 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.248986 master-0 kubenswrapper[3991]: I0318 09:53:59.248967 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-ovnkube-config\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.249554 master-0 kubenswrapper[3991]: I0318 09:53:59.249006 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-etc-openvswitch\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.249554 master-0 kubenswrapper[3991]: I0318 09:53:59.249096 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.249554 master-0 kubenswrapper[3991]: I0318 09:53:59.249192 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-run-ovn-kubernetes\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.249554 master-0 kubenswrapper[3991]: I0318 09:53:59.249274 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-slash\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.249554 master-0 kubenswrapper[3991]: I0318 09:53:59.249331 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" 
(UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-slash\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.249554 master-0 kubenswrapper[3991]: I0318 09:53:59.249366 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-ovn-node-metrics-cert\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.249554 master-0 kubenswrapper[3991]: I0318 09:53:59.249398 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-systemd-units\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.249554 master-0 kubenswrapper[3991]: I0318 09:53:59.249421 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-var-lib-openvswitch\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.249554 master-0 kubenswrapper[3991]: I0318 09:53:59.249444 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-run-ovn\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.249554 master-0 kubenswrapper[3991]: I0318 09:53:59.249488 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-var-lib-openvswitch\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.249554 master-0 kubenswrapper[3991]: I0318 09:53:59.249521 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-node-log\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.249554 master-0 kubenswrapper[3991]: I0318 09:53:59.249564 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-run-netns\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.250562 master-0 kubenswrapper[3991]: I0318 09:53:59.249594 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-cni-netd\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.250562 master-0 kubenswrapper[3991]: I0318 09:53:59.249640 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-kubelet\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.250562 master-0 kubenswrapper[3991]: I0318 09:53:59.249648 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-run-ovn\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.250562 master-0 kubenswrapper[3991]: I0318 09:53:59.249667 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-run-netns\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.250562 master-0 kubenswrapper[3991]: I0318 09:53:59.249673 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-cni-bin\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.250562 master-0 kubenswrapper[3991]: I0318 09:53:59.249733 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-cni-bin\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.250562 master-0 kubenswrapper[3991]: I0318 09:53:59.249740 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-systemd-units\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.250562 master-0 kubenswrapper[3991]: I0318 09:53:59.249753 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-log-socket\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.250562 master-0 kubenswrapper[3991]: I0318 09:53:59.249646 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-node-log\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.250562 master-0 kubenswrapper[3991]: I0318 09:53:59.249780 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-log-socket\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.250562 master-0 kubenswrapper[3991]: I0318 09:53:59.249821 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-kubelet\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.250562 master-0 kubenswrapper[3991]: I0318 09:53:59.249853 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-run-openvswitch\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.250562 master-0 kubenswrapper[3991]: I0318 09:53:59.249881 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-cni-netd\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.250562 master-0 kubenswrapper[3991]: I0318 09:53:59.249897 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-env-overrides\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.250562 master-0 kubenswrapper[3991]: I0318 09:53:59.249926 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-run-openvswitch\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.250562 master-0 kubenswrapper[3991]: I0318 09:53:59.249932 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-run-systemd\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.250562 master-0 kubenswrapper[3991]: I0318 09:53:59.249966 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmffc\" (UniqueName: \"kubernetes.io/projected/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-kube-api-access-gmffc\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.250562 master-0 kubenswrapper[3991]: I0318 09:53:59.250001 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-ovnkube-script-lib\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.251872 master-0 kubenswrapper[3991]: I0318 09:53:59.250049 3991 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-run-systemd\") on node \"master-0\" DevicePath \"\"" Mar 18 09:53:59.251872 master-0 kubenswrapper[3991]: I0318 09:53:59.250070 3991 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-ovn-node-metrics-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 09:53:59.251872 master-0 kubenswrapper[3991]: I0318 09:53:59.250095 3991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4mbp\" (UniqueName: \"kubernetes.io/projected/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd-kube-api-access-g4mbp\") on node \"master-0\" DevicePath \"\"" Mar 18 09:53:59.251872 master-0 kubenswrapper[3991]: I0318 09:53:59.250115 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-run-systemd\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.251872 master-0 kubenswrapper[3991]: I0318 09:53:59.250742 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-env-overrides\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.251872 master-0 kubenswrapper[3991]: I0318 09:53:59.250898 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-ovnkube-config\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.254112 master-0 kubenswrapper[3991]: I0318 09:53:59.253759 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-ovnkube-script-lib\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.256188 master-0 kubenswrapper[3991]: I0318 09:53:59.256140 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-ovn-node-metrics-cert\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.276567 master-0 kubenswrapper[3991]: I0318 09:53:59.276503 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmffc\" (UniqueName: \"kubernetes.io/projected/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-kube-api-access-gmffc\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.363164 master-0 kubenswrapper[3991]: I0318 09:53:59.363094 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:53:59.382726 master-0 kubenswrapper[3991]: W0318 09:53:59.382398 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d02e790_b9d0_4e2d_a97d_ec2eaf720f28.slice/crio-cfbca75437300b7b872afd6b8a0f67b07ac16a2585e869d44683d0377dfcaeaa WatchSource:0}: Error finding container cfbca75437300b7b872afd6b8a0f67b07ac16a2585e869d44683d0377dfcaeaa: Status 404 returned error can't find the container with id cfbca75437300b7b872afd6b8a0f67b07ac16a2585e869d44683d0377dfcaeaa Mar 18 09:53:59.677791 master-0 kubenswrapper[3991]: I0318 09:53:59.677670 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" event={"ID":"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28","Type":"ContainerStarted","Data":"cfbca75437300b7b872afd6b8a0f67b07ac16a2585e869d44683d0377dfcaeaa"} Mar 18 09:53:59.680138 master-0 kubenswrapper[3991]: I0318 09:53:59.680085 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tdz4c_ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd/ovnkube-controller/0.log" Mar 18 09:53:59.682548 master-0 kubenswrapper[3991]: I0318 09:53:59.682492 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tdz4c_ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd/kube-rbac-proxy-ovn-metrics/0.log" Mar 18 09:53:59.683093 master-0 kubenswrapper[3991]: I0318 09:53:59.683041 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tdz4c_ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd/kube-rbac-proxy-node/0.log" Mar 18 09:53:59.684137 master-0 kubenswrapper[3991]: I0318 09:53:59.684075 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tdz4c_ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd/ovn-acl-logging/0.log" Mar 18 09:53:59.685022 master-0 
kubenswrapper[3991]: I0318 09:53:59.684973 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tdz4c_ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd/ovn-controller/0.log" Mar 18 09:53:59.685543 master-0 kubenswrapper[3991]: I0318 09:53:59.685469 3991 generic.go:334] "Generic (PLEG): container finished" podID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerID="b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1" exitCode=2 Mar 18 09:53:59.685543 master-0 kubenswrapper[3991]: I0318 09:53:59.685518 3991 generic.go:334] "Generic (PLEG): container finished" podID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerID="141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981" exitCode=0 Mar 18 09:53:59.685770 master-0 kubenswrapper[3991]: I0318 09:53:59.685544 3991 generic.go:334] "Generic (PLEG): container finished" podID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerID="3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286" exitCode=0 Mar 18 09:53:59.685770 master-0 kubenswrapper[3991]: I0318 09:53:59.685574 3991 generic.go:334] "Generic (PLEG): container finished" podID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerID="e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead" exitCode=0 Mar 18 09:53:59.685770 master-0 kubenswrapper[3991]: I0318 09:53:59.685611 3991 generic.go:334] "Generic (PLEG): container finished" podID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerID="b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7" exitCode=143 Mar 18 09:53:59.685770 master-0 kubenswrapper[3991]: I0318 09:53:59.685635 3991 generic.go:334] "Generic (PLEG): container finished" podID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerID="232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f" exitCode=143 Mar 18 09:53:59.685770 master-0 kubenswrapper[3991]: I0318 09:53:59.685654 3991 generic.go:334] "Generic (PLEG): container finished" podID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" 
containerID="f3b713b1d44464a6aa1ac327af12aa163f7b3fdff26047cb55c6b7caf9ff0a3a" exitCode=143 Mar 18 09:53:59.685770 master-0 kubenswrapper[3991]: I0318 09:53:59.685676 3991 generic.go:334] "Generic (PLEG): container finished" podID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" containerID="fa248870d055c3bba2ba5df757728f5a762bb886b6135fb6dce6f1a7437e400e" exitCode=143 Mar 18 09:53:59.685770 master-0 kubenswrapper[3991]: I0318 09:53:59.685564 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c" event={"ID":"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd","Type":"ContainerDied","Data":"b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1"} Mar 18 09:53:59.685770 master-0 kubenswrapper[3991]: I0318 09:53:59.685652 3991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c" Mar 18 09:53:59.685770 master-0 kubenswrapper[3991]: I0318 09:53:59.685757 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c" event={"ID":"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd","Type":"ContainerDied","Data":"141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981"} Mar 18 09:53:59.686383 master-0 kubenswrapper[3991]: I0318 09:53:59.685797 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c" event={"ID":"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd","Type":"ContainerDied","Data":"3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286"} Mar 18 09:53:59.686383 master-0 kubenswrapper[3991]: I0318 09:53:59.685867 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c" event={"ID":"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd","Type":"ContainerDied","Data":"e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead"} Mar 18 09:53:59.686383 master-0 kubenswrapper[3991]: I0318 09:53:59.685884 3991 scope.go:117] "RemoveContainer" 
containerID="b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1"
Mar 18 09:53:59.686383 master-0 kubenswrapper[3991]: I0318 09:53:59.685902 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c" event={"ID":"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd","Type":"ContainerDied","Data":"b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7"}
Mar 18 09:53:59.686383 master-0 kubenswrapper[3991]: I0318 09:53:59.686247 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c" event={"ID":"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd","Type":"ContainerDied","Data":"232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f"}
Mar 18 09:53:59.686383 master-0 kubenswrapper[3991]: I0318 09:53:59.686288 3991 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f3b713b1d44464a6aa1ac327af12aa163f7b3fdff26047cb55c6b7caf9ff0a3a"}
Mar 18 09:53:59.686383 master-0 kubenswrapper[3991]: I0318 09:53:59.686347 3991 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fa248870d055c3bba2ba5df757728f5a762bb886b6135fb6dce6f1a7437e400e"}
Mar 18 09:53:59.686383 master-0 kubenswrapper[3991]: I0318 09:53:59.686372 3991 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ab4fe89b410d3e37696ed45ba842ec613441fe4ca4d4d1de69ac7d8e4cdd191e"}
Mar 18 09:53:59.686383 master-0 kubenswrapper[3991]: I0318 09:53:59.686390 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c" event={"ID":"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd","Type":"ContainerDied","Data":"f3b713b1d44464a6aa1ac327af12aa163f7b3fdff26047cb55c6b7caf9ff0a3a"}
Mar 18 09:53:59.686998 master-0 kubenswrapper[3991]: I0318 09:53:59.686407 3991 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1"}
Mar 18 09:53:59.686998 master-0 kubenswrapper[3991]: I0318 09:53:59.686420 3991 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981"}
Mar 18 09:53:59.686998 master-0 kubenswrapper[3991]: I0318 09:53:59.686432 3991 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286"}
Mar 18 09:53:59.686998 master-0 kubenswrapper[3991]: I0318 09:53:59.686442 3991 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead"}
Mar 18 09:53:59.686998 master-0 kubenswrapper[3991]: I0318 09:53:59.686524 3991 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7"}
Mar 18 09:53:59.686998 master-0 kubenswrapper[3991]: I0318 09:53:59.686536 3991 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f"}
Mar 18 09:53:59.687790 master-0 kubenswrapper[3991]: I0318 09:53:59.686546 3991 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f3b713b1d44464a6aa1ac327af12aa163f7b3fdff26047cb55c6b7caf9ff0a3a"}
Mar 18 09:53:59.687957 master-0 kubenswrapper[3991]: I0318 09:53:59.687806 3991 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fa248870d055c3bba2ba5df757728f5a762bb886b6135fb6dce6f1a7437e400e"}
Mar 18 09:53:59.687957 master-0 kubenswrapper[3991]: I0318 09:53:59.687844 3991 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ab4fe89b410d3e37696ed45ba842ec613441fe4ca4d4d1de69ac7d8e4cdd191e"}
Mar 18 09:53:59.687957 master-0 kubenswrapper[3991]: I0318 09:53:59.687861 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c" event={"ID":"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd","Type":"ContainerDied","Data":"fa248870d055c3bba2ba5df757728f5a762bb886b6135fb6dce6f1a7437e400e"}
Mar 18 09:53:59.687957 master-0 kubenswrapper[3991]: I0318 09:53:59.687881 3991 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1"}
Mar 18 09:53:59.687957 master-0 kubenswrapper[3991]: I0318 09:53:59.687891 3991 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981"}
Mar 18 09:53:59.687957 master-0 kubenswrapper[3991]: I0318 09:53:59.687898 3991 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286"}
Mar 18 09:53:59.687957 master-0 kubenswrapper[3991]: I0318 09:53:59.687905 3991 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead"}
Mar 18 09:53:59.687957 master-0 kubenswrapper[3991]: I0318 09:53:59.687912 3991 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7"}
Mar 18 09:53:59.687957 master-0 kubenswrapper[3991]: I0318 09:53:59.687920 3991 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f"}
Mar 18 09:53:59.687957 master-0 kubenswrapper[3991]: I0318 09:53:59.687927 3991 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f3b713b1d44464a6aa1ac327af12aa163f7b3fdff26047cb55c6b7caf9ff0a3a"}
Mar 18 09:53:59.687957 master-0 kubenswrapper[3991]: I0318 09:53:59.687934 3991 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fa248870d055c3bba2ba5df757728f5a762bb886b6135fb6dce6f1a7437e400e"}
Mar 18 09:53:59.687957 master-0 kubenswrapper[3991]: I0318 09:53:59.687959 3991 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ab4fe89b410d3e37696ed45ba842ec613441fe4ca4d4d1de69ac7d8e4cdd191e"}
Mar 18 09:53:59.687957 master-0 kubenswrapper[3991]: I0318 09:53:59.687969 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tdz4c" event={"ID":"ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd","Type":"ContainerDied","Data":"5759283fba42fbbd311783807000d9d77eaa5d0bcefb9d4dbe9eb43e6dbcd178"}
Mar 18 09:53:59.687957 master-0 kubenswrapper[3991]: I0318 09:53:59.687983 3991 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1"}
Mar 18 09:53:59.691055 master-0 kubenswrapper[3991]: I0318 09:53:59.687992 3991 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981"}
Mar 18 09:53:59.691055 master-0 kubenswrapper[3991]: I0318 09:53:59.688000 3991 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286"}
Mar 18 09:53:59.691055 master-0 kubenswrapper[3991]: I0318 09:53:59.688007 3991 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead"}
Mar 18 09:53:59.691055 master-0 kubenswrapper[3991]: I0318 09:53:59.688014 3991 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7"}
Mar 18 09:53:59.691055 master-0 kubenswrapper[3991]: I0318 09:53:59.688110 3991 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f"}
Mar 18 09:53:59.691055 master-0 kubenswrapper[3991]: I0318 09:53:59.688267 3991 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f3b713b1d44464a6aa1ac327af12aa163f7b3fdff26047cb55c6b7caf9ff0a3a"}
Mar 18 09:53:59.691055 master-0 kubenswrapper[3991]: I0318 09:53:59.688712 3991 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fa248870d055c3bba2ba5df757728f5a762bb886b6135fb6dce6f1a7437e400e"}
Mar 18 09:53:59.691055 master-0 kubenswrapper[3991]: I0318 09:53:59.688730 3991 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ab4fe89b410d3e37696ed45ba842ec613441fe4ca4d4d1de69ac7d8e4cdd191e"}
Mar 18 09:53:59.702160 master-0 kubenswrapper[3991]: I0318 09:53:59.702003 3991 scope.go:117] "RemoveContainer" containerID="141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981"
Mar 18 09:53:59.723368 master-0 kubenswrapper[3991]: I0318 09:53:59.721930 3991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tdz4c"]
Mar 18 09:53:59.733529 master-0 kubenswrapper[3991]: I0318 09:53:59.733467 3991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tdz4c"]
Mar 18 09:53:59.785805 master-0 kubenswrapper[3991]: I0318 09:53:59.785601 3991 scope.go:117] "RemoveContainer" containerID="3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286"
Mar 18 09:53:59.795255 master-0 kubenswrapper[3991]: I0318 09:53:59.795205 3991 scope.go:117] "RemoveContainer" containerID="e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead"
Mar 18 09:53:59.804156 master-0 kubenswrapper[3991]: I0318 09:53:59.804105 3991 scope.go:117] "RemoveContainer" containerID="b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7"
Mar 18 09:53:59.815137 master-0 kubenswrapper[3991]: I0318 09:53:59.815095 3991 scope.go:117] "RemoveContainer" containerID="232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f"
Mar 18 09:53:59.826948 master-0 kubenswrapper[3991]: I0318 09:53:59.826902 3991 scope.go:117] "RemoveContainer" containerID="f3b713b1d44464a6aa1ac327af12aa163f7b3fdff26047cb55c6b7caf9ff0a3a"
Mar 18 09:53:59.835527 master-0 kubenswrapper[3991]: I0318 09:53:59.835485 3991 scope.go:117] "RemoveContainer" containerID="fa248870d055c3bba2ba5df757728f5a762bb886b6135fb6dce6f1a7437e400e"
Mar 18 09:53:59.845575 master-0 kubenswrapper[3991]: I0318 09:53:59.845540 3991 scope.go:117] "RemoveContainer" containerID="ab4fe89b410d3e37696ed45ba842ec613441fe4ca4d4d1de69ac7d8e4cdd191e"
Mar 18 09:53:59.856100 master-0 kubenswrapper[3991]: I0318 09:53:59.854371 3991 scope.go:117] "RemoveContainer" containerID="b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1"
Mar 18 09:53:59.856100 master-0 kubenswrapper[3991]: E0318 09:53:59.854784 3991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1\": container with ID starting with b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1 not found: ID does not exist" containerID="b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1"
Mar 18 09:53:59.856100 master-0 kubenswrapper[3991]: I0318 09:53:59.854888 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1"} err="failed to get container status \"b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1\": rpc error: code = NotFound desc = could not find container \"b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1\": container with ID starting with b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1 not found: ID does not exist"
Mar 18 09:53:59.856100 master-0 kubenswrapper[3991]: I0318 09:53:59.854937 3991 scope.go:117] "RemoveContainer" containerID="141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981"
Mar 18 09:53:59.856100 master-0 kubenswrapper[3991]: E0318 09:53:59.855713 3991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981\": container with ID starting with 141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981 not found: ID does not exist" containerID="141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981"
Mar 18 09:53:59.856100 master-0 kubenswrapper[3991]: I0318 09:53:59.855739 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981"} err="failed to get container status \"141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981\": rpc error: code = NotFound desc = could not find container \"141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981\": container with ID starting with 141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981 not found: ID does not exist"
Mar 18 09:53:59.856100 master-0 kubenswrapper[3991]: I0318 09:53:59.855760 3991 scope.go:117] "RemoveContainer" containerID="3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286"
Mar 18 09:53:59.856100 master-0 kubenswrapper[3991]: E0318 09:53:59.856032 3991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286\": container with ID starting with 3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286 not found: ID does not exist" containerID="3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286"
Mar 18 09:53:59.856100 master-0 kubenswrapper[3991]: I0318 09:53:59.856051 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286"} err="failed to get container status \"3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286\": rpc error: code = NotFound desc = could not find container \"3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286\": container with ID starting with 3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286 not found: ID does not exist"
Mar 18 09:53:59.856100 master-0 kubenswrapper[3991]: I0318 09:53:59.856124 3991 scope.go:117] "RemoveContainer" containerID="e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead"
Mar 18 09:53:59.856747 master-0 kubenswrapper[3991]: E0318 09:53:59.856556 3991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead\": container with ID starting with e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead not found: ID does not exist" containerID="e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead"
Mar 18 09:53:59.856747 master-0 kubenswrapper[3991]: I0318 09:53:59.856573 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead"} err="failed to get container status \"e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead\": rpc error: code = NotFound desc = could not find container \"e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead\": container with ID starting with e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead not found: ID does not exist"
Mar 18 09:53:59.856747 master-0 kubenswrapper[3991]: I0318 09:53:59.856586 3991 scope.go:117] "RemoveContainer" containerID="b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7"
Mar 18 09:53:59.856942 master-0 kubenswrapper[3991]: E0318 09:53:59.856913 3991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7\": container with ID starting with b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7 not found: ID does not exist" containerID="b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7"
Mar 18 09:53:59.857003 master-0 kubenswrapper[3991]: I0318 09:53:59.856950 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7"} err="failed to get container status \"b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7\": rpc error: code = NotFound desc = could not find container \"b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7\": container with ID starting with b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7 not found: ID does not exist"
Mar 18 09:53:59.857003 master-0 kubenswrapper[3991]: I0318 09:53:59.856978 3991 scope.go:117] "RemoveContainer" containerID="232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f"
Mar 18 09:53:59.857310 master-0 kubenswrapper[3991]: E0318 09:53:59.857249 3991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f\": container with ID starting with 232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f not found: ID does not exist" containerID="232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f"
Mar 18 09:53:59.857310 master-0 kubenswrapper[3991]: I0318 09:53:59.857277 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f"} err="failed to get container status \"232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f\": rpc error: code = NotFound desc = could not find container \"232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f\": container with ID starting with 232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f not found: ID does not exist"
Mar 18 09:53:59.857310 master-0 kubenswrapper[3991]: I0318 09:53:59.857291 3991 scope.go:117] "RemoveContainer" containerID="f3b713b1d44464a6aa1ac327af12aa163f7b3fdff26047cb55c6b7caf9ff0a3a"
Mar 18 09:53:59.857651 master-0 kubenswrapper[3991]: E0318 09:53:59.857587 3991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3b713b1d44464a6aa1ac327af12aa163f7b3fdff26047cb55c6b7caf9ff0a3a\": container with ID starting with f3b713b1d44464a6aa1ac327af12aa163f7b3fdff26047cb55c6b7caf9ff0a3a not found: ID does not exist" containerID="f3b713b1d44464a6aa1ac327af12aa163f7b3fdff26047cb55c6b7caf9ff0a3a"
Mar 18 09:53:59.857651 master-0 kubenswrapper[3991]: I0318 09:53:59.857630 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3b713b1d44464a6aa1ac327af12aa163f7b3fdff26047cb55c6b7caf9ff0a3a"} err="failed to get container status \"f3b713b1d44464a6aa1ac327af12aa163f7b3fdff26047cb55c6b7caf9ff0a3a\": rpc error: code = NotFound desc = could not find container \"f3b713b1d44464a6aa1ac327af12aa163f7b3fdff26047cb55c6b7caf9ff0a3a\": container with ID starting with f3b713b1d44464a6aa1ac327af12aa163f7b3fdff26047cb55c6b7caf9ff0a3a not found: ID does not exist"
Mar 18 09:53:59.857844 master-0 kubenswrapper[3991]: I0318 09:53:59.857655 3991 scope.go:117] "RemoveContainer" containerID="fa248870d055c3bba2ba5df757728f5a762bb886b6135fb6dce6f1a7437e400e"
Mar 18 09:53:59.858139 master-0 kubenswrapper[3991]: E0318 09:53:59.858112 3991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa248870d055c3bba2ba5df757728f5a762bb886b6135fb6dce6f1a7437e400e\": container with ID starting with fa248870d055c3bba2ba5df757728f5a762bb886b6135fb6dce6f1a7437e400e not found: ID does not exist" containerID="fa248870d055c3bba2ba5df757728f5a762bb886b6135fb6dce6f1a7437e400e"
Mar 18 09:53:59.858139 master-0 kubenswrapper[3991]: I0318 09:53:59.858140 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa248870d055c3bba2ba5df757728f5a762bb886b6135fb6dce6f1a7437e400e"} err="failed to get container status \"fa248870d055c3bba2ba5df757728f5a762bb886b6135fb6dce6f1a7437e400e\": rpc error: code = NotFound desc = could not find container \"fa248870d055c3bba2ba5df757728f5a762bb886b6135fb6dce6f1a7437e400e\": container with ID starting with fa248870d055c3bba2ba5df757728f5a762bb886b6135fb6dce6f1a7437e400e not found: ID does not exist"
Mar 18 09:53:59.858139 master-0 kubenswrapper[3991]: I0318 09:53:59.858158 3991 scope.go:117] "RemoveContainer" containerID="ab4fe89b410d3e37696ed45ba842ec613441fe4ca4d4d1de69ac7d8e4cdd191e"
Mar 18 09:53:59.858488 master-0 kubenswrapper[3991]: E0318 09:53:59.858453 3991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab4fe89b410d3e37696ed45ba842ec613441fe4ca4d4d1de69ac7d8e4cdd191e\": container with ID starting with ab4fe89b410d3e37696ed45ba842ec613441fe4ca4d4d1de69ac7d8e4cdd191e not found: ID does not exist" containerID="ab4fe89b410d3e37696ed45ba842ec613441fe4ca4d4d1de69ac7d8e4cdd191e"
Mar 18 09:53:59.858488 master-0 kubenswrapper[3991]: I0318 09:53:59.858478 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab4fe89b410d3e37696ed45ba842ec613441fe4ca4d4d1de69ac7d8e4cdd191e"} err="failed to get container status \"ab4fe89b410d3e37696ed45ba842ec613441fe4ca4d4d1de69ac7d8e4cdd191e\": rpc error: code = NotFound desc = could not find container \"ab4fe89b410d3e37696ed45ba842ec613441fe4ca4d4d1de69ac7d8e4cdd191e\": container with ID starting with ab4fe89b410d3e37696ed45ba842ec613441fe4ca4d4d1de69ac7d8e4cdd191e not found: ID does not exist"
Mar 18 09:53:59.858586 master-0 kubenswrapper[3991]: I0318 09:53:59.858495 3991 scope.go:117] "RemoveContainer" containerID="b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1"
Mar 18 09:53:59.858787 master-0 kubenswrapper[3991]: I0318 09:53:59.858758 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1"} err="failed to get container status \"b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1\": rpc error: code = NotFound desc = could not find container \"b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1\": container with ID starting with b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1 not found: ID does not exist"
Mar 18 09:53:59.858787 master-0 kubenswrapper[3991]: I0318 09:53:59.858783 3991 scope.go:117] "RemoveContainer" containerID="141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981"
Mar 18 09:53:59.859097 master-0 kubenswrapper[3991]: I0318 09:53:59.859059 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981"} err="failed to get container status \"141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981\": rpc error: code = NotFound desc = could not find container \"141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981\": container with ID starting with 141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981 not found: ID does not exist"
Mar 18 09:53:59.859097 master-0 kubenswrapper[3991]: I0318 09:53:59.859086 3991 scope.go:117] "RemoveContainer" containerID="3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286"
Mar 18 09:53:59.859551 master-0 kubenswrapper[3991]: I0318 09:53:59.859516 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286"} err="failed to get container status \"3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286\": rpc error: code = NotFound desc = could not find container \"3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286\": container with ID starting with 3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286 not found: ID does not exist"
Mar 18 09:53:59.859551 master-0 kubenswrapper[3991]: I0318 09:53:59.859537 3991 scope.go:117] "RemoveContainer" containerID="e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead"
Mar 18 09:53:59.859809 master-0 kubenswrapper[3991]: I0318 09:53:59.859772 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead"} err="failed to get container status \"e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead\": rpc error: code = NotFound desc = could not find container \"e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead\": container with ID starting with e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead not found: ID does not exist"
Mar 18 09:53:59.859809 master-0 kubenswrapper[3991]: I0318 09:53:59.859797 3991 scope.go:117] "RemoveContainer" containerID="b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7"
Mar 18 09:53:59.860141 master-0 kubenswrapper[3991]: I0318 09:53:59.860055 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7"} err="failed to get container status \"b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7\": rpc error: code = NotFound desc = could not find container \"b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7\": container with ID starting with b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7 not found: ID does not exist"
Mar 18 09:53:59.860141 master-0 kubenswrapper[3991]: I0318 09:53:59.860100 3991 scope.go:117] "RemoveContainer" containerID="232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f"
Mar 18 09:53:59.860431 master-0 kubenswrapper[3991]: I0318 09:53:59.860327 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f"} err="failed to get container status \"232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f\": rpc error: code = NotFound desc = could not find container \"232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f\": container with ID starting with 232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f not found: ID does not exist"
Mar 18 09:53:59.860431 master-0 kubenswrapper[3991]: I0318 09:53:59.860368 3991 scope.go:117] "RemoveContainer" containerID="f3b713b1d44464a6aa1ac327af12aa163f7b3fdff26047cb55c6b7caf9ff0a3a"
Mar 18 09:53:59.860696 master-0 kubenswrapper[3991]: I0318 09:53:59.860656 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3b713b1d44464a6aa1ac327af12aa163f7b3fdff26047cb55c6b7caf9ff0a3a"} err="failed to get container status \"f3b713b1d44464a6aa1ac327af12aa163f7b3fdff26047cb55c6b7caf9ff0a3a\": rpc error: code = NotFound desc = could not find container \"f3b713b1d44464a6aa1ac327af12aa163f7b3fdff26047cb55c6b7caf9ff0a3a\": container with ID starting with f3b713b1d44464a6aa1ac327af12aa163f7b3fdff26047cb55c6b7caf9ff0a3a not found: ID does not exist"
Mar 18 09:53:59.860696 master-0 kubenswrapper[3991]: I0318 09:53:59.860682 3991 scope.go:117] "RemoveContainer" containerID="fa248870d055c3bba2ba5df757728f5a762bb886b6135fb6dce6f1a7437e400e"
Mar 18 09:53:59.861055 master-0 kubenswrapper[3991]: I0318 09:53:59.861016 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa248870d055c3bba2ba5df757728f5a762bb886b6135fb6dce6f1a7437e400e"} err="failed to get container status \"fa248870d055c3bba2ba5df757728f5a762bb886b6135fb6dce6f1a7437e400e\": rpc error: code = NotFound desc = could not find container \"fa248870d055c3bba2ba5df757728f5a762bb886b6135fb6dce6f1a7437e400e\": container with ID starting with fa248870d055c3bba2ba5df757728f5a762bb886b6135fb6dce6f1a7437e400e not found: ID does not exist"
Mar 18 09:53:59.861055 master-0 kubenswrapper[3991]: I0318 09:53:59.861043 3991 scope.go:117] "RemoveContainer" containerID="ab4fe89b410d3e37696ed45ba842ec613441fe4ca4d4d1de69ac7d8e4cdd191e"
Mar 18 09:53:59.861369 master-0 kubenswrapper[3991]: I0318 09:53:59.861331 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab4fe89b410d3e37696ed45ba842ec613441fe4ca4d4d1de69ac7d8e4cdd191e"} err="failed to get container status \"ab4fe89b410d3e37696ed45ba842ec613441fe4ca4d4d1de69ac7d8e4cdd191e\": rpc error: code = NotFound desc = could not find container \"ab4fe89b410d3e37696ed45ba842ec613441fe4ca4d4d1de69ac7d8e4cdd191e\": container with ID starting with ab4fe89b410d3e37696ed45ba842ec613441fe4ca4d4d1de69ac7d8e4cdd191e not found: ID does not exist"
Mar 18 09:53:59.861422 master-0 kubenswrapper[3991]: I0318 09:53:59.861355 3991 scope.go:117] "RemoveContainer" containerID="b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1"
Mar 18 09:53:59.861698 master-0 kubenswrapper[3991]: I0318 09:53:59.861663 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1"} err="failed to get container status \"b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1\": rpc error: code = NotFound desc = could not find container \"b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1\": container with ID starting with b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1 not found: ID does not exist"
Mar 18 09:53:59.861698 master-0 kubenswrapper[3991]: I0318 09:53:59.861684 3991 scope.go:117] "RemoveContainer" containerID="141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981"
Mar 18 09:53:59.862099 master-0 kubenswrapper[3991]: I0318 09:53:59.862047 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981"} err="failed to get container status \"141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981\": rpc error: code = NotFound desc = could not find container \"141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981\": container with ID starting with 141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981 not found: ID does not exist"
Mar 18 09:53:59.862099 master-0 kubenswrapper[3991]: I0318 09:53:59.862089 3991 scope.go:117] "RemoveContainer" containerID="3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286"
Mar 18 09:53:59.862440 master-0 kubenswrapper[3991]: I0318 09:53:59.862398 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286"} err="failed to get container status \"3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286\": rpc error: code = NotFound desc = could not find container \"3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286\": container with ID starting with 3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286 not found: ID does not exist"
Mar 18 09:53:59.862440 master-0 kubenswrapper[3991]: I0318 09:53:59.862427 3991 scope.go:117] "RemoveContainer" containerID="e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead"
Mar 18 09:53:59.862907 master-0 kubenswrapper[3991]: I0318 09:53:59.862854 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead"} err="failed to get container status \"e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead\": rpc error: code = NotFound desc = could not find container \"e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead\": container with ID starting with e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead not found: ID does not exist"
Mar 18 09:53:59.862907 master-0 kubenswrapper[3991]: I0318 09:53:59.862892 3991 scope.go:117] "RemoveContainer" containerID="b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7"
Mar 18 09:53:59.863664 master-0 kubenswrapper[3991]: I0318 09:53:59.863629 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7"} err="failed to get container status \"b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7\": rpc error: code = NotFound desc = could not find container \"b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7\": container with ID starting with b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7 not found: ID does not exist"
Mar 18 09:53:59.863664 master-0 kubenswrapper[3991]: I0318 09:53:59.863649 3991 scope.go:117] "RemoveContainer" containerID="232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f"
Mar 18 09:53:59.864194 master-0 kubenswrapper[3991]: I0318 09:53:59.864130 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f"} err="failed to get container status \"232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f\": rpc error: code = NotFound desc = could not find container \"232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f\": container with ID starting with 232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f not found: ID does not exist"
Mar 18 09:53:59.864194 master-0 kubenswrapper[3991]: I0318 09:53:59.864185 3991 scope.go:117] "RemoveContainer" containerID="f3b713b1d44464a6aa1ac327af12aa163f7b3fdff26047cb55c6b7caf9ff0a3a"
Mar 18 09:53:59.864658 master-0 kubenswrapper[3991]: I0318 09:53:59.864615 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3b713b1d44464a6aa1ac327af12aa163f7b3fdff26047cb55c6b7caf9ff0a3a"} err="failed to get container status \"f3b713b1d44464a6aa1ac327af12aa163f7b3fdff26047cb55c6b7caf9ff0a3a\": rpc error: code = NotFound desc = could not find container \"f3b713b1d44464a6aa1ac327af12aa163f7b3fdff26047cb55c6b7caf9ff0a3a\": container with ID starting with f3b713b1d44464a6aa1ac327af12aa163f7b3fdff26047cb55c6b7caf9ff0a3a not found: ID does not exist"
Mar 18 09:53:59.864658 master-0 kubenswrapper[3991]: I0318 09:53:59.864638 3991 scope.go:117] "RemoveContainer" containerID="fa248870d055c3bba2ba5df757728f5a762bb886b6135fb6dce6f1a7437e400e"
Mar 18 09:53:59.864902 master-0 kubenswrapper[3991]: I0318 09:53:59.864871 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa248870d055c3bba2ba5df757728f5a762bb886b6135fb6dce6f1a7437e400e"} err="failed to get container status \"fa248870d055c3bba2ba5df757728f5a762bb886b6135fb6dce6f1a7437e400e\": rpc error: code = NotFound desc = could not find container \"fa248870d055c3bba2ba5df757728f5a762bb886b6135fb6dce6f1a7437e400e\": container with ID starting with fa248870d055c3bba2ba5df757728f5a762bb886b6135fb6dce6f1a7437e400e not found: ID does not exist"
Mar 18 09:53:59.864902 master-0 kubenswrapper[3991]: I0318 09:53:59.864889 3991 scope.go:117] "RemoveContainer" containerID="ab4fe89b410d3e37696ed45ba842ec613441fe4ca4d4d1de69ac7d8e4cdd191e"
Mar 18 09:53:59.865178 master-0 kubenswrapper[3991]: I0318 09:53:59.865139 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab4fe89b410d3e37696ed45ba842ec613441fe4ca4d4d1de69ac7d8e4cdd191e"} err="failed to get container status \"ab4fe89b410d3e37696ed45ba842ec613441fe4ca4d4d1de69ac7d8e4cdd191e\": rpc error: code = NotFound desc = could not find container \"ab4fe89b410d3e37696ed45ba842ec613441fe4ca4d4d1de69ac7d8e4cdd191e\": container with ID starting with ab4fe89b410d3e37696ed45ba842ec613441fe4ca4d4d1de69ac7d8e4cdd191e not found: ID does not exist"
Mar 18 09:53:59.865178 master-0 kubenswrapper[3991]: I0318 09:53:59.865165 3991 scope.go:117] "RemoveContainer" containerID="b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1"
Mar 18 09:53:59.865429 master-0 kubenswrapper[3991]: I0318 09:53:59.865392 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1"} err="failed to get container status \"b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1\": rpc error: code = NotFound desc = could not find container \"b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1\": container with ID starting with b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1 not found: ID does not exist"
Mar 18 09:53:59.865429 master-0 kubenswrapper[3991]: I0318 09:53:59.865418 3991 scope.go:117] "RemoveContainer" containerID="141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981"
Mar 18 09:53:59.865845 master-0 kubenswrapper[3991]: I0318 09:53:59.865789 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981"} err="failed to get container status \"141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981\": rpc error: code = NotFound desc = could not find container \"141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981\": container with ID starting with 141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981 not found: ID does not exist"
Mar 18 09:53:59.865845 master-0 kubenswrapper[3991]: I0318 09:53:59.865815 3991 scope.go:117] "RemoveContainer" containerID="3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286"
Mar 18 09:53:59.866443 master-0 kubenswrapper[3991]: I0318 09:53:59.866369 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286"} err="failed to get container status \"3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286\": rpc error: code = NotFound desc = could not find container \"3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286\": container with ID starting with 3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286 not found: ID does not exist"
Mar 18 09:53:59.866499 master-0 kubenswrapper[3991]: I0318 09:53:59.866442 3991 scope.go:117] "RemoveContainer" containerID="e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead"
Mar 18 09:53:59.866935 master-0 kubenswrapper[3991]: I0318 09:53:59.866887 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead"} err="failed to get container status \"e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead\": rpc error: code = NotFound desc = could not find container \"e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead\": container with ID starting with e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead not found: ID does not exist"
Mar 18 09:53:59.866935 master-0 kubenswrapper[3991]: I0318 09:53:59.866925 3991 scope.go:117] "RemoveContainer" containerID="b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7"
Mar 18 09:53:59.867311 master-0 kubenswrapper[3991]: I0318 09:53:59.867271 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7"} err="failed to get container status \"b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7\": rpc error: code = NotFound desc = could not find container \"b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7\": container with ID starting with b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7 not found: ID does not exist"
Mar 18 09:53:59.867311 master-0 kubenswrapper[3991]: I0318 09:53:59.867301 3991 scope.go:117] "RemoveContainer" containerID="232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f"
Mar 18 09:53:59.867585 master-0 kubenswrapper[3991]: I0318 09:53:59.867553 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f"} err="failed to get container status
\"232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f\": rpc error: code = NotFound desc = could not find container \"232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f\": container with ID starting with 232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f not found: ID does not exist" Mar 18 09:53:59.867585 master-0 kubenswrapper[3991]: I0318 09:53:59.867574 3991 scope.go:117] "RemoveContainer" containerID="f3b713b1d44464a6aa1ac327af12aa163f7b3fdff26047cb55c6b7caf9ff0a3a" Mar 18 09:53:59.867925 master-0 kubenswrapper[3991]: I0318 09:53:59.867887 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3b713b1d44464a6aa1ac327af12aa163f7b3fdff26047cb55c6b7caf9ff0a3a"} err="failed to get container status \"f3b713b1d44464a6aa1ac327af12aa163f7b3fdff26047cb55c6b7caf9ff0a3a\": rpc error: code = NotFound desc = could not find container \"f3b713b1d44464a6aa1ac327af12aa163f7b3fdff26047cb55c6b7caf9ff0a3a\": container with ID starting with f3b713b1d44464a6aa1ac327af12aa163f7b3fdff26047cb55c6b7caf9ff0a3a not found: ID does not exist" Mar 18 09:53:59.867925 master-0 kubenswrapper[3991]: I0318 09:53:59.867916 3991 scope.go:117] "RemoveContainer" containerID="fa248870d055c3bba2ba5df757728f5a762bb886b6135fb6dce6f1a7437e400e" Mar 18 09:53:59.868261 master-0 kubenswrapper[3991]: I0318 09:53:59.868217 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa248870d055c3bba2ba5df757728f5a762bb886b6135fb6dce6f1a7437e400e"} err="failed to get container status \"fa248870d055c3bba2ba5df757728f5a762bb886b6135fb6dce6f1a7437e400e\": rpc error: code = NotFound desc = could not find container \"fa248870d055c3bba2ba5df757728f5a762bb886b6135fb6dce6f1a7437e400e\": container with ID starting with fa248870d055c3bba2ba5df757728f5a762bb886b6135fb6dce6f1a7437e400e not found: ID does not exist" Mar 18 09:53:59.868261 master-0 kubenswrapper[3991]: I0318 09:53:59.868244 3991 
scope.go:117] "RemoveContainer" containerID="ab4fe89b410d3e37696ed45ba842ec613441fe4ca4d4d1de69ac7d8e4cdd191e" Mar 18 09:53:59.868564 master-0 kubenswrapper[3991]: I0318 09:53:59.868517 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab4fe89b410d3e37696ed45ba842ec613441fe4ca4d4d1de69ac7d8e4cdd191e"} err="failed to get container status \"ab4fe89b410d3e37696ed45ba842ec613441fe4ca4d4d1de69ac7d8e4cdd191e\": rpc error: code = NotFound desc = could not find container \"ab4fe89b410d3e37696ed45ba842ec613441fe4ca4d4d1de69ac7d8e4cdd191e\": container with ID starting with ab4fe89b410d3e37696ed45ba842ec613441fe4ca4d4d1de69ac7d8e4cdd191e not found: ID does not exist" Mar 18 09:53:59.868564 master-0 kubenswrapper[3991]: I0318 09:53:59.868549 3991 scope.go:117] "RemoveContainer" containerID="b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1" Mar 18 09:53:59.868883 master-0 kubenswrapper[3991]: I0318 09:53:59.868852 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1"} err="failed to get container status \"b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1\": rpc error: code = NotFound desc = could not find container \"b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1\": container with ID starting with b474b35e70a8ce1003693763418348b25f6406b3743da804f24f4297ae934ea1 not found: ID does not exist" Mar 18 09:53:59.868883 master-0 kubenswrapper[3991]: I0318 09:53:59.868880 3991 scope.go:117] "RemoveContainer" containerID="141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981" Mar 18 09:53:59.869180 master-0 kubenswrapper[3991]: I0318 09:53:59.869145 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981"} err="failed to get container status 
\"141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981\": rpc error: code = NotFound desc = could not find container \"141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981\": container with ID starting with 141cda6ca47029304a4dd9a22718610f3655c43cfa3445d72dbbf68aa253c981 not found: ID does not exist" Mar 18 09:53:59.869180 master-0 kubenswrapper[3991]: I0318 09:53:59.869169 3991 scope.go:117] "RemoveContainer" containerID="3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286" Mar 18 09:53:59.869470 master-0 kubenswrapper[3991]: I0318 09:53:59.869434 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286"} err="failed to get container status \"3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286\": rpc error: code = NotFound desc = could not find container \"3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286\": container with ID starting with 3cc462c25dce56b5269f691934769f56d31eeb7b6e98dfaa1286e02d7c039286 not found: ID does not exist" Mar 18 09:53:59.869470 master-0 kubenswrapper[3991]: I0318 09:53:59.869462 3991 scope.go:117] "RemoveContainer" containerID="e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead" Mar 18 09:53:59.869784 master-0 kubenswrapper[3991]: I0318 09:53:59.869749 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead"} err="failed to get container status \"e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead\": rpc error: code = NotFound desc = could not find container \"e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead\": container with ID starting with e0b697dde9171713170ddafcea948d6192f62b0a9730d4ed35aad5119c9f9ead not found: ID does not exist" Mar 18 09:53:59.869784 master-0 kubenswrapper[3991]: I0318 09:53:59.869773 3991 
scope.go:117] "RemoveContainer" containerID="b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7" Mar 18 09:53:59.870106 master-0 kubenswrapper[3991]: I0318 09:53:59.870069 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7"} err="failed to get container status \"b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7\": rpc error: code = NotFound desc = could not find container \"b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7\": container with ID starting with b3b0b7319666009677448aacc940416adc385c8030240111939e0b9d045b0dd7 not found: ID does not exist" Mar 18 09:53:59.870106 master-0 kubenswrapper[3991]: I0318 09:53:59.870096 3991 scope.go:117] "RemoveContainer" containerID="232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f" Mar 18 09:53:59.870381 master-0 kubenswrapper[3991]: I0318 09:53:59.870343 3991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f"} err="failed to get container status \"232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f\": rpc error: code = NotFound desc = could not find container \"232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f\": container with ID starting with 232d965eb87741730bba60388cb6e68fbc9122ada08c5984f898bfa06af4144f not found: ID does not exist" Mar 18 09:54:00.149390 master-0 kubenswrapper[3991]: I0318 09:54:00.149314 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 09:54:00.149641 master-0 kubenswrapper[3991]: E0318 09:54:00.149464 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-42l55" podUID="74795f5d-dcd7-4723-8931-c34b59ce3087" Mar 18 09:54:00.690771 master-0 kubenswrapper[3991]: I0318 09:54:00.690679 3991 generic.go:334] "Generic (PLEG): container finished" podID="9d02e790-b9d0-4e2d-a97d-ec2eaf720f28" containerID="273c8765db6facd550b6e56f450546d9b1b71f8e90628bc1352e6d3fe67f7a08" exitCode=0 Mar 18 09:54:00.690771 master-0 kubenswrapper[3991]: I0318 09:54:00.690724 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" event={"ID":"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28","Type":"ContainerDied","Data":"273c8765db6facd550b6e56f450546d9b1b71f8e90628bc1352e6d3fe67f7a08"} Mar 18 09:54:01.149743 master-0 kubenswrapper[3991]: I0318 09:54:01.149658 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:54:01.150135 master-0 kubenswrapper[3991]: E0318 09:54:01.149787 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343" Mar 18 09:54:01.153358 master-0 kubenswrapper[3991]: I0318 09:54:01.153280 3991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd" path="/var/lib/kubelet/pods/ed0d20a9-cfe9-47fd-a4ca-cb04b881e7fd/volumes" Mar 18 09:54:01.695324 master-0 kubenswrapper[3991]: I0318 09:54:01.695276 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xgdvw_03de1ea6-da57-4e13-8e5a-d5e10a9f9957/kube-multus/0.log" Mar 18 09:54:01.695859 master-0 kubenswrapper[3991]: I0318 09:54:01.695354 3991 generic.go:334] "Generic (PLEG): container finished" podID="03de1ea6-da57-4e13-8e5a-d5e10a9f9957" containerID="2da220e2852846e9b471d19bf3329629d81b1d881746691dfdddb60fd750adba" exitCode=1 Mar 18 09:54:01.695859 master-0 kubenswrapper[3991]: I0318 09:54:01.695466 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xgdvw" event={"ID":"03de1ea6-da57-4e13-8e5a-d5e10a9f9957","Type":"ContainerDied","Data":"2da220e2852846e9b471d19bf3329629d81b1d881746691dfdddb60fd750adba"} Mar 18 09:54:01.696244 master-0 kubenswrapper[3991]: I0318 09:54:01.696204 3991 scope.go:117] "RemoveContainer" containerID="2da220e2852846e9b471d19bf3329629d81b1d881746691dfdddb60fd750adba" Mar 18 09:54:01.701575 master-0 kubenswrapper[3991]: I0318 09:54:01.700798 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" event={"ID":"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28","Type":"ContainerStarted","Data":"cfe5ef56f3dd6f3fe9dc135863b0976689b54a7c4d15b7855af44c94458e0a2d"} Mar 18 09:54:01.701575 master-0 kubenswrapper[3991]: I0318 09:54:01.700903 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" 
event={"ID":"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28","Type":"ContainerStarted","Data":"1eafd5e1d8075b3557d26376780d3a31ce7020f6adefde458c3f7ba1b936c538"} Mar 18 09:54:01.701575 master-0 kubenswrapper[3991]: I0318 09:54:01.700933 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" event={"ID":"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28","Type":"ContainerStarted","Data":"fbab2ff5a200889244ae8f5af315e78f18e97c28f777ce9d639601fa183325d7"} Mar 18 09:54:01.701575 master-0 kubenswrapper[3991]: I0318 09:54:01.700956 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" event={"ID":"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28","Type":"ContainerStarted","Data":"d1824933ca319e976a0bc34c452a04368e4bc85e7a0d24249620631730bc9d3a"} Mar 18 09:54:01.701575 master-0 kubenswrapper[3991]: I0318 09:54:01.700974 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" event={"ID":"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28","Type":"ContainerStarted","Data":"493cf82938c7c5ef2c7935745898c9bc5817938eb3ce9d3bd746076f2627c2e7"} Mar 18 09:54:01.701575 master-0 kubenswrapper[3991]: I0318 09:54:01.700991 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" event={"ID":"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28","Type":"ContainerStarted","Data":"064edc20882a4942d3337e90f9ad146d0988dd06bb904b529b5aae64b3f0ccf5"} Mar 18 09:54:02.149159 master-0 kubenswrapper[3991]: I0318 09:54:02.149091 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 09:54:02.149422 master-0 kubenswrapper[3991]: E0318 09:54:02.149280 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-42l55" podUID="74795f5d-dcd7-4723-8931-c34b59ce3087" Mar 18 09:54:02.220481 master-0 kubenswrapper[3991]: E0318 09:54:02.220330 3991 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 18 09:54:02.707880 master-0 kubenswrapper[3991]: I0318 09:54:02.707770 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xgdvw_03de1ea6-da57-4e13-8e5a-d5e10a9f9957/kube-multus/0.log" Mar 18 09:54:02.708733 master-0 kubenswrapper[3991]: I0318 09:54:02.707892 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xgdvw" event={"ID":"03de1ea6-da57-4e13-8e5a-d5e10a9f9957","Type":"ContainerStarted","Data":"7bf2c15191632567ded0f0a0cc42398e46b8b62c68e8abd9d189eb8b0b493d1c"} Mar 18 09:54:03.149806 master-0 kubenswrapper[3991]: I0318 09:54:03.149456 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:54:03.150015 master-0 kubenswrapper[3991]: E0318 09:54:03.149963 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343" Mar 18 09:54:04.148998 master-0 kubenswrapper[3991]: I0318 09:54:04.148909 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 09:54:04.149783 master-0 kubenswrapper[3991]: E0318 09:54:04.149120 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-42l55" podUID="74795f5d-dcd7-4723-8931-c34b59ce3087" Mar 18 09:54:04.720189 master-0 kubenswrapper[3991]: I0318 09:54:04.720109 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" event={"ID":"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28","Type":"ContainerStarted","Data":"ed7d680f5ddca2afa8787b76893288007c8b1c18142651c0cc16460bba942d37"} Mar 18 09:54:05.150116 master-0 kubenswrapper[3991]: I0318 09:54:05.149982 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:54:05.150909 master-0 kubenswrapper[3991]: E0318 09:54:05.150195 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343" Mar 18 09:54:06.149817 master-0 kubenswrapper[3991]: I0318 09:54:06.149715 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 09:54:06.150116 master-0 kubenswrapper[3991]: E0318 09:54:06.149918 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-42l55" podUID="74795f5d-dcd7-4723-8931-c34b59ce3087" Mar 18 09:54:07.149705 master-0 kubenswrapper[3991]: I0318 09:54:07.149607 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:54:07.150644 master-0 kubenswrapper[3991]: E0318 09:54:07.150549 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343" Mar 18 09:54:07.222281 master-0 kubenswrapper[3991]: E0318 09:54:07.222192 3991 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 18 09:54:08.149081 master-0 kubenswrapper[3991]: I0318 09:54:08.148945 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 09:54:08.149383 master-0 kubenswrapper[3991]: E0318 09:54:08.149163 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-42l55" podUID="74795f5d-dcd7-4723-8931-c34b59ce3087" Mar 18 09:54:08.566234 master-0 kubenswrapper[3991]: I0318 09:54:08.566163 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs\") pod \"network-metrics-daemon-tbxt4\" (UID: \"0442ec6c-5973-40a5-a0c3-dc02de46d343\") " pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:54:08.566975 master-0 kubenswrapper[3991]: E0318 09:54:08.566678 3991 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 09:54:08.566975 master-0 kubenswrapper[3991]: E0318 09:54:08.566764 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs podName:0442ec6c-5973-40a5-a0c3-dc02de46d343 nodeName:}" failed. No retries permitted until 2026-03-18 09:55:12.56673688 +0000 UTC m=+196.525676795 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs") pod "network-metrics-daemon-tbxt4" (UID: "0442ec6c-5973-40a5-a0c3-dc02de46d343") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 09:54:09.150101 master-0 kubenswrapper[3991]: I0318 09:54:09.149992 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:54:09.150342 master-0 kubenswrapper[3991]: E0318 09:54:09.150160 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343" Mar 18 09:54:09.741864 master-0 kubenswrapper[3991]: I0318 09:54:09.741714 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" event={"ID":"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28","Type":"ContainerStarted","Data":"e15329788d5e4a197b414aacee8669d574e9c1917d4f1c6b3feb53e85d64ef2c"} Mar 18 09:54:10.149592 master-0 kubenswrapper[3991]: I0318 09:54:10.149532 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 09:54:10.149907 master-0 kubenswrapper[3991]: E0318 09:54:10.149661 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-42l55" podUID="74795f5d-dcd7-4723-8931-c34b59ce3087" Mar 18 09:54:10.747944 master-0 kubenswrapper[3991]: I0318 09:54:10.746968 3991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:10.747944 master-0 kubenswrapper[3991]: I0318 09:54:10.747002 3991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:10.747944 master-0 kubenswrapper[3991]: I0318 09:54:10.747010 3991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:10.772776 master-0 kubenswrapper[3991]: I0318 09:54:10.772708 3991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:10.774144 master-0 kubenswrapper[3991]: I0318 09:54:10.774104 3991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:11.045438 master-0 kubenswrapper[3991]: I0318 09:54:11.045326 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" podStartSLOduration=12.045293884 podStartE2EDuration="12.045293884s" podCreationTimestamp="2026-03-18 09:53:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:54:11.003472209 +0000 UTC m=+134.962412114" watchObservedRunningTime="2026-03-18 09:54:11.045293884 +0000 UTC m=+135.004233819" Mar 18 09:54:11.149769 master-0 kubenswrapper[3991]: I0318 09:54:11.149704 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:54:11.150092 master-0 kubenswrapper[3991]: E0318 09:54:11.149973 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343" Mar 18 09:54:12.149859 master-0 kubenswrapper[3991]: I0318 09:54:12.149442 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 09:54:12.151022 master-0 kubenswrapper[3991]: E0318 09:54:12.149996 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-42l55" podUID="74795f5d-dcd7-4723-8931-c34b59ce3087" Mar 18 09:54:12.223267 master-0 kubenswrapper[3991]: E0318 09:54:12.223160 3991 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 18 09:54:13.149712 master-0 kubenswrapper[3991]: I0318 09:54:13.149631 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4"
Mar 18 09:54:13.149975 master-0 kubenswrapper[3991]: E0318 09:54:13.149817 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343"
Mar 18 09:54:14.150148 master-0 kubenswrapper[3991]: I0318 09:54:14.149408 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-42l55"
Mar 18 09:54:14.150148 master-0 kubenswrapper[3991]: E0318 09:54:14.149610 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-42l55" podUID="74795f5d-dcd7-4723-8931-c34b59ce3087"
Mar 18 09:54:15.149194 master-0 kubenswrapper[3991]: I0318 09:54:15.149102 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4"
Mar 18 09:54:15.149542 master-0 kubenswrapper[3991]: E0318 09:54:15.149247 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343"
Mar 18 09:54:16.150163 master-0 kubenswrapper[3991]: I0318 09:54:16.149985 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-42l55"
Mar 18 09:54:16.151166 master-0 kubenswrapper[3991]: E0318 09:54:16.150274 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-42l55" podUID="74795f5d-dcd7-4723-8931-c34b59ce3087"
Mar 18 09:54:17.149318 master-0 kubenswrapper[3991]: I0318 09:54:17.149226 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4"
Mar 18 09:54:17.150520 master-0 kubenswrapper[3991]: E0318 09:54:17.150459 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tbxt4" podUID="0442ec6c-5973-40a5-a0c3-dc02de46d343"
Mar 18 09:54:18.150146 master-0 kubenswrapper[3991]: I0318 09:54:18.149957 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-42l55"
Mar 18 09:54:18.152981 master-0 kubenswrapper[3991]: I0318 09:54:18.152878 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 18 09:54:18.153796 master-0 kubenswrapper[3991]: I0318 09:54:18.153400 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 18 09:54:19.152722 master-0 kubenswrapper[3991]: I0318 09:54:19.152599 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4"
Mar 18 09:54:19.155643 master-0 kubenswrapper[3991]: I0318 09:54:19.155587 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 18 09:54:21.744854 master-0 kubenswrapper[3991]: I0318 09:54:21.744435 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady"
Mar 18 09:54:21.795912 master-0 kubenswrapper[3991]: I0318 09:54:21.793846 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q"]
Mar 18 09:54:21.795912 master-0 kubenswrapper[3991]: I0318 09:54:21.794626 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q"
Mar 18 09:54:21.797261 master-0 kubenswrapper[3991]: I0318 09:54:21.797206 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Mar 18 09:54:21.797338 master-0 kubenswrapper[3991]: I0318 09:54:21.797251 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Mar 18 09:54:21.798505 master-0 kubenswrapper[3991]: I0318 09:54:21.798452 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Mar 18 09:54:21.809789 master-0 kubenswrapper[3991]: I0318 09:54:21.808947 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt"]
Mar 18 09:54:21.809789 master-0 kubenswrapper[3991]: I0318 09:54:21.809691 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt"
Mar 18 09:54:21.817897 master-0 kubenswrapper[3991]: I0318 09:54:21.811275 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm"]
Mar 18 09:54:21.817897 master-0 kubenswrapper[3991]: I0318 09:54:21.812130 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm"
Mar 18 09:54:21.817897 master-0 kubenswrapper[3991]: I0318 09:54:21.812157 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s"]
Mar 18 09:54:21.836923 master-0 kubenswrapper[3991]: I0318 09:54:21.819326 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s"
Mar 18 09:54:21.836923 master-0 kubenswrapper[3991]: I0318 09:54:21.834344 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 18 09:54:21.836923 master-0 kubenswrapper[3991]: I0318 09:54:21.834732 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 18 09:54:21.836923 master-0 kubenswrapper[3991]: I0318 09:54:21.836509 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 18 09:54:21.836923 master-0 kubenswrapper[3991]: I0318 09:54:21.836918 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 18 09:54:21.837442 master-0 kubenswrapper[3991]: I0318 09:54:21.837155 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 18 09:54:21.837442 master-0 kubenswrapper[3991]: I0318 09:54:21.837230 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 18 09:54:21.837442 master-0 kubenswrapper[3991]: I0318 09:54:21.837273 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 18 09:54:21.837442 master-0 kubenswrapper[3991]: I0318 09:54:21.837371 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq"]
Mar 18 09:54:21.837682 master-0 kubenswrapper[3991]: I0318 09:54:21.837493 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 18 09:54:21.838893 master-0 kubenswrapper[3991]: I0318 09:54:21.837839 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 18 09:54:21.838893 master-0 kubenswrapper[3991]: I0318 09:54:21.838219 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 18 09:54:21.838893 master-0 kubenswrapper[3991]: I0318 09:54:21.838428 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 18 09:54:21.838893 master-0 kubenswrapper[3991]: I0318 09:54:21.838564 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv"]
Mar 18 09:54:21.845971 master-0 kubenswrapper[3991]: I0318 09:54:21.845801 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq"
Mar 18 09:54:21.846916 master-0 kubenswrapper[3991]: I0318 09:54:21.846874 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k"]
Mar 18 09:54:21.847870 master-0 kubenswrapper[3991]: I0318 09:54:21.847265 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k"
Mar 18 09:54:21.847870 master-0 kubenswrapper[3991]: I0318 09:54:21.847528 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 18 09:54:21.847870 master-0 kubenswrapper[3991]: I0318 09:54:21.847526 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv"
Mar 18 09:54:21.850173 master-0 kubenswrapper[3991]: I0318 09:54:21.849410 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb"]
Mar 18 09:54:21.850173 master-0 kubenswrapper[3991]: I0318 09:54:21.849788 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m"]
Mar 18 09:54:21.850173 master-0 kubenswrapper[3991]: I0318 09:54:21.850135 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m"
Mar 18 09:54:21.850792 master-0 kubenswrapper[3991]: I0318 09:54:21.850384 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 18 09:54:21.850792 master-0 kubenswrapper[3991]: I0318 09:54:21.850638 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb"
Mar 18 09:54:21.850792 master-0 kubenswrapper[3991]: I0318 09:54:21.850732 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 18 09:54:21.851099 master-0 kubenswrapper[3991]: I0318 09:54:21.850961 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 18 09:54:21.851205 master-0 kubenswrapper[3991]: I0318 09:54:21.851151 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 18 09:54:21.851439 master-0 kubenswrapper[3991]: I0318 09:54:21.851380 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 18 09:54:21.851668 master-0 kubenswrapper[3991]: I0318 09:54:21.851602 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 18 09:54:21.852886 master-0 kubenswrapper[3991]: I0318 09:54:21.852443 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 18 09:54:21.852886 master-0 kubenswrapper[3991]: I0318 09:54:21.852704 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz"]
Mar 18 09:54:21.855227 master-0 kubenswrapper[3991]: I0318 09:54:21.854709 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 18 09:54:21.855227 master-0 kubenswrapper[3991]: I0318 09:54:21.855067 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-89ccd998f-2glpv"]
Mar 18 09:54:21.855691 master-0 kubenswrapper[3991]: I0318 09:54:21.855311 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz"
Mar 18 09:54:21.855691 master-0 kubenswrapper[3991]: I0318 09:54:21.855580 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc"]
Mar 18 09:54:21.856512 master-0 kubenswrapper[3991]: I0318 09:54:21.856233 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc"
Mar 18 09:54:21.856788 master-0 kubenswrapper[3991]: I0318 09:54:21.856569 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv"
Mar 18 09:54:21.859908 master-0 kubenswrapper[3991]: I0318 09:54:21.858187 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 18 09:54:21.859908 master-0 kubenswrapper[3991]: I0318 09:54:21.858396 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 18 09:54:21.859908 master-0 kubenswrapper[3991]: I0318 09:54:21.858509 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 18 09:54:21.859908 master-0 kubenswrapper[3991]: I0318 09:54:21.858716 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 18 09:54:21.859908 master-0 kubenswrapper[3991]: I0318 09:54:21.858981 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 18 09:54:21.859908 master-0 kubenswrapper[3991]: I0318 09:54:21.859848 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 18 09:54:21.859908 master-0 kubenswrapper[3991]: I0318 09:54:21.859875 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Mar 18 09:54:21.859908 master-0 kubenswrapper[3991]: I0318 09:54:21.859885 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 18 09:54:21.862940 master-0 kubenswrapper[3991]: I0318 09:54:21.862693 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 18 09:54:21.862940 master-0 kubenswrapper[3991]: I0318 09:54:21.862787 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 18 09:54:21.863243 master-0 kubenswrapper[3991]: I0318 09:54:21.863128 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 18 09:54:21.865041 master-0 kubenswrapper[3991]: I0318 09:54:21.864205 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 18 09:54:21.868909 master-0 kubenswrapper[3991]: I0318 09:54:21.865666 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 18 09:54:21.868909 master-0 kubenswrapper[3991]: I0318 09:54:21.866092 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 18 09:54:21.868909 master-0 kubenswrapper[3991]: I0318 09:54:21.866101 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 18 09:54:21.880380 master-0 kubenswrapper[3991]: I0318 09:54:21.871839 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698"]
Mar 18 09:54:21.880380 master-0 kubenswrapper[3991]: I0318 09:54:21.872125 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-95bf4f4d-495pg"]
Mar 18 09:54:21.880380 master-0 kubenswrapper[3991]: I0318 09:54:21.872343 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6"]
Mar 18 09:54:21.880380 master-0 kubenswrapper[3991]: I0318 09:54:21.872366 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 18 09:54:21.880380 master-0 kubenswrapper[3991]: I0318 09:54:21.872630 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698"
Mar 18 09:54:21.880380 master-0 kubenswrapper[3991]: I0318 09:54:21.874251 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 18 09:54:21.880380 master-0 kubenswrapper[3991]: I0318 09:54:21.874746 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m"]
Mar 18 09:54:21.880380 master-0 kubenswrapper[3991]: I0318 09:54:21.875194 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr"]
Mar 18 09:54:21.880380 master-0 kubenswrapper[3991]: I0318 09:54:21.875353 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m"
Mar 18 09:54:21.880380 master-0 kubenswrapper[3991]: I0318 09:54:21.875656 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-9c5679d8f-jrmkr"]
Mar 18 09:54:21.880380 master-0 kubenswrapper[3991]: I0318 09:54:21.875351 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg"
Mar 18 09:54:21.880380 master-0 kubenswrapper[3991]: I0318 09:54:21.875866 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr"
Mar 18 09:54:21.880380 master-0 kubenswrapper[3991]: I0318 09:54:21.875926 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6"
Mar 18 09:54:21.880380 master-0 kubenswrapper[3991]: I0318 09:54:21.877420 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr"]
Mar 18 09:54:21.880380 master-0 kubenswrapper[3991]: I0318 09:54:21.877667 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c"]
Mar 18 09:54:21.880380 master-0 kubenswrapper[3991]: I0318 09:54:21.877955 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c"
Mar 18 09:54:21.880380 master-0 kubenswrapper[3991]: I0318 09:54:21.878328 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-9c5679d8f-jrmkr"
Mar 18 09:54:21.880380 master-0 kubenswrapper[3991]: I0318 09:54:21.878582 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr"
Mar 18 09:54:21.880380 master-0 kubenswrapper[3991]: I0318 09:54:21.878953 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-mqbmq"]
Mar 18 09:54:21.880380 master-0 kubenswrapper[3991]: I0318 09:54:21.879417 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-mqbmq"
Mar 18 09:54:21.890919 master-0 kubenswrapper[3991]: I0318 09:54:21.890885 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 18 09:54:21.894296 master-0 kubenswrapper[3991]: I0318 09:54:21.893589 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 18 09:54:21.894634 master-0 kubenswrapper[3991]: I0318 09:54:21.893630 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Mar 18 09:54:21.894730 master-0 kubenswrapper[3991]: I0318 09:54:21.893671 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 18 09:54:21.894899 master-0 kubenswrapper[3991]: I0318 09:54:21.893738 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 18 09:54:21.895119 master-0 kubenswrapper[3991]: I0318 09:54:21.893842 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 18 09:54:21.895312 master-0 kubenswrapper[3991]: I0318 09:54:21.895295 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2"]
Mar 18 09:54:21.895398 master-0 kubenswrapper[3991]: I0318 09:54:21.893953 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 18 09:54:21.895504 master-0 kubenswrapper[3991]: I0318 09:54:21.893961 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Mar 18 09:54:21.895595 master-0 kubenswrapper[3991]: I0318 09:54:21.894132 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Mar 18 09:54:21.896354 master-0 kubenswrapper[3991]: I0318 09:54:21.894172 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Mar 18 09:54:21.896354 master-0 kubenswrapper[3991]: I0318 09:54:21.894216 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 18 09:54:21.896354 master-0 kubenswrapper[3991]: I0318 09:54:21.894465 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 18 09:54:21.896354 master-0 kubenswrapper[3991]: I0318 09:54:21.894476 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Mar 18 09:54:21.897076 master-0 kubenswrapper[3991]: I0318 09:54:21.897039 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 18 09:54:21.897611 master-0 kubenswrapper[3991]: I0318 09:54:21.897598 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2"
Mar 18 09:54:21.897999 master-0 kubenswrapper[3991]: I0318 09:54:21.897984 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q"]
Mar 18 09:54:21.904300 master-0 kubenswrapper[3991]: I0318 09:54:21.902577 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt"]
Mar 18 09:54:21.904300 master-0 kubenswrapper[3991]: I0318 09:54:21.902660 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq"]
Mar 18 09:54:21.909589 master-0 kubenswrapper[3991]: I0318 09:54:21.908551 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 18 09:54:21.909589 master-0 kubenswrapper[3991]: I0318 09:54:21.908575 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 18 09:54:21.909589 master-0 kubenswrapper[3991]: I0318 09:54:21.908613 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Mar 18 09:54:21.909589 master-0 kubenswrapper[3991]: I0318 09:54:21.908860 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 18 09:54:21.909589 master-0 kubenswrapper[3991]: I0318 09:54:21.909518 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 18 09:54:21.911332 master-0 kubenswrapper[3991]: I0318 09:54:21.910321 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 18 09:54:21.911332 master-0 kubenswrapper[3991]: I0318 09:54:21.910559 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 18 09:54:21.911332 master-0 kubenswrapper[3991]: I0318 09:54:21.910714 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Mar 18 09:54:21.911332 master-0 kubenswrapper[3991]: I0318 09:54:21.910933 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Mar 18 09:54:21.911332 master-0 kubenswrapper[3991]: I0318 09:54:21.911063 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 18 09:54:21.911586 master-0 kubenswrapper[3991]: I0318 09:54:21.911492 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 18 09:54:21.912334 master-0 kubenswrapper[3991]: I0318 09:54:21.912296 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv"]
Mar 18 09:54:21.913725 master-0 kubenswrapper[3991]: I0318 09:54:21.913230 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s"]
Mar 18 09:54:21.914499 master-0 kubenswrapper[3991]: I0318 09:54:21.914478 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm"]
Mar 18 09:54:21.915223 master-0 kubenswrapper[3991]: I0318 09:54:21.915198 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-89ccd998f-2glpv"]
Mar 18 09:54:21.916159 master-0 kubenswrapper[3991]: I0318 09:54:21.916113 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc"]
Mar 18 09:54:21.916999 master-0 kubenswrapper[3991]: I0318 09:54:21.916965 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz"]
Mar 18 09:54:21.917996 master-0 kubenswrapper[3991]: I0318 09:54:21.917975 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k"]
Mar 18 09:54:21.918714 master-0 kubenswrapper[3991]: I0318 09:54:21.918692 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-r7h65"]
Mar 18 09:54:21.919183 master-0 kubenswrapper[3991]: I0318 09:54:21.919164 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-r7h65"
Mar 18 09:54:21.921379 master-0 kubenswrapper[3991]: I0318 09:54:21.919534 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m"]
Mar 18 09:54:21.921775 master-0 kubenswrapper[3991]: I0318 09:54:21.921751 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Mar 18 09:54:21.922738 master-0 kubenswrapper[3991]: I0318 09:54:21.922442 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 18 09:54:21.923439 master-0 kubenswrapper[3991]: I0318 09:54:21.923152 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2"]
Mar 18 09:54:21.923439 master-0 kubenswrapper[3991]: I0318 09:54:21.923271 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 18 09:54:21.923439 master-0 kubenswrapper[3991]: I0318 09:54:21.923311 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 18 09:54:21.925052 master-0 kubenswrapper[3991]: I0318 09:54:21.925016 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb"]
Mar 18 09:54:21.926080 master-0 kubenswrapper[3991]: I0318 09:54:21.926050 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-95bf4f4d-495pg"]
Mar 18 09:54:21.927192 master-0 kubenswrapper[3991]: I0318 09:54:21.927152 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr"]
Mar 18 09:54:21.928304 master-0 kubenswrapper[3991]: I0318 09:54:21.928256 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-9c5679d8f-jrmkr"]
Mar 18 09:54:21.928931 master-0 kubenswrapper[3991]: I0318 09:54:21.928916 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698"]
Mar 18 09:54:21.931103 master-0 kubenswrapper[3991]: I0318 09:54:21.931075 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6"]
Mar 18 09:54:21.936492 master-0 kubenswrapper[3991]: I0318 09:54:21.936477 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Mar 18 09:54:21.938482 master-0 kubenswrapper[3991]: I0318 09:54:21.938427 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c"]
Mar 18 09:54:21.939338 master-0 kubenswrapper[3991]: I0318 09:54:21.939258 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 18 09:54:21.939426 master-0 kubenswrapper[3991]: I0318 09:54:21.939389 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 18 09:54:21.939594 master-0 kubenswrapper[3991]: I0318 09:54:21.939504 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 18 09:54:21.939594 master-0 kubenswrapper[3991]: I0318 09:54:21.939524 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 18 09:54:21.940610 master-0 kubenswrapper[3991]: I0318 09:54:21.940306 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 18 09:54:21.940876 master-0 kubenswrapper[3991]: I0318 09:54:21.940847 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr"]
Mar 18 09:54:21.941728 master-0 kubenswrapper[3991]: I0318 09:54:21.941519 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-mqbmq"]
Mar 18 09:54:21.942172 master-0 kubenswrapper[3991]: I0318 09:54:21.942146 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m"]
Mar 18 09:54:21.949175 master-0 kubenswrapper[3991]: I0318 09:54:21.949114 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Mar 18 09:54:21.950395 master-0 kubenswrapper[3991]: I0318 09:54:21.950367 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 18 09:54:21.960488 master-0 kubenswrapper[3991]: I0318 09:54:21.958192 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxv6v\" (UniqueName: \"kubernetes.io/projected/a078565a-6970-4f42-84f4-938f1d637245-kube-api-access-cxv6v\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm"
Mar 18 09:54:21.960488 master-0 kubenswrapper[3991]: I0318 09:54:21.958236 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz"
Mar 18 09:54:21.960488 master-0 kubenswrapper[3991]: I0318 09:54:21.958257 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a6a616d-012a-479e-ab3d-b21295ea1805-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-smghb\" (UID: \"6a6a616d-012a-479e-ab3d-b21295ea1805\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb"
Mar 18 09:54:21.960488 master-0 kubenswrapper[3991]: I0318 09:54:21.958272 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a078565a-6970-4f42-84f4-938f1d637245-etcd-client\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm"
Mar 18 09:54:21.960488 master-0 kubenswrapper[3991]: I0318 09:54:21.958294 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx9p2\" (UniqueName: \"kubernetes.io/projected/db52ca42-e458-407f-9eeb-bf6de6405edc-kube-api-access-jx9p2\") pod \"olm-operator-5c9796789-hc74k\" (UID: \"db52ca42-e458-407f-9eeb-bf6de6405edc\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k"
Mar 18 09:54:21.960488 master-0 kubenswrapper[3991]: I0318 09:54:21.958309 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a078565a-6970-4f42-84f4-938f1d637245-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm"
Mar 18 09:54:21.960488 master-0 kubenswrapper[3991]: I0318 09:54:21.958333 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shbrj\" (UniqueName: \"kubernetes.io/projected/6f266bad-8b30-4300-ad93-9d48e61f2440-kube-api-access-shbrj\") pod \"marketplace-operator-89ccd998f-2glpv\" (UID: \"6f266bad-8b30-4300-ad93-9d48e61f2440\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv"
Mar 18 09:54:21.960488 master-0 kubenswrapper[3991]: I0318 09:54:21.958353 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhzg4\" (UniqueName: \"kubernetes.io/projected/d26036f1-bdce-4ec5-873f-962fa7e8e6c1-kube-api-access-lhzg4\") pod \"cluster-olm-operator-67dcd4998-mrc8q\" (UID: \"d26036f1-bdce-4ec5-873f-962fa7e8e6c1\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q"
Mar 18 09:54:21.960488 master-0 kubenswrapper[3991]: I0318 09:54:21.958373 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8ee99294-4785-49d0-b493-0d734cf09396-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m"
Mar 18 09:54:21.960488 master-0 kubenswrapper[3991]: I0318 09:54:21.958392 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/accc57fb-75f5-4f89-9804-6ede7f77e27c-trusted-ca\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz"
Mar 18 09:54:21.960488 master-0 kubenswrapper[3991]: I0318 09:54:21.958411 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a6a616d-012a-479e-ab3d-b21295ea1805-config\") pod \"kube-apiserver-operator-8b68b9d9b-smghb\" (UID: \"6a6a616d-012a-479e-ab3d-b21295ea1805\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb"
Mar 18 09:54:21.960488 master-0 kubenswrapper[3991]: I0318 09:54:21.958429 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-g25jq\" (UID: \"3646e0cd-49c9-4a98-a2e3-efe9359cc6c4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq"
Mar 18 09:54:21.960488 master-0 kubenswrapper[3991]: I0318 09:54:21.958449 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0999f781-3299-4cb6-ba76-2a4f4584c685-config\") pod \"kube-controller-manager-operator-ff989d6cc-pzqqc\" (UID: \"0999f781-3299-4cb6-ba76-2a4f4584c685\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc"
Mar 18 09:54:21.960488 master-0 kubenswrapper[3991]: I0318
09:54:21.958501 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-vj8tt\" (UID: \"3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt" Mar 18 09:54:21.960488 master-0 kubenswrapper[3991]: I0318 09:54:21.958517 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0999f781-3299-4cb6-ba76-2a4f4584c685-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-pzqqc\" (UID: \"0999f781-3299-4cb6-ba76-2a4f4584c685\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc" Mar 18 09:54:21.961066 master-0 kubenswrapper[3991]: I0318 09:54:21.958533 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a078565a-6970-4f42-84f4-938f1d637245-etcd-ca\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" Mar 18 09:54:21.961066 master-0 kubenswrapper[3991]: I0318 09:54:21.958549 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25k9g\" (UniqueName: \"kubernetes.io/projected/ee376320-9ca0-444d-ab37-9cbcb6729b11-kube-api-access-25k9g\") pod \"catalog-operator-68f85b4d6c-fhz5s\" (UID: \"ee376320-9ca0-444d-ab37-9cbcb6729b11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s" Mar 18 09:54:21.961066 master-0 kubenswrapper[3991]: I0318 09:54:21.958564 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operand-assets\" 
(UniqueName: \"kubernetes.io/empty-dir/d26036f1-bdce-4ec5-873f-962fa7e8e6c1-operand-assets\") pod \"cluster-olm-operator-67dcd4998-mrc8q\" (UID: \"d26036f1-bdce-4ec5-873f-962fa7e8e6c1\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q" Mar 18 09:54:21.961066 master-0 kubenswrapper[3991]: I0318 09:54:21.958582 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6-config\") pod \"openshift-kube-scheduler-operator-dddff6458-vj8tt\" (UID: \"3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt" Mar 18 09:54:21.961066 master-0 kubenswrapper[3991]: I0318 09:54:21.958598 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-vj8tt\" (UID: \"3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt" Mar 18 09:54:21.961066 master-0 kubenswrapper[3991]: I0318 09:54:21.958623 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5dk8\" (UniqueName: \"kubernetes.io/projected/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4-kube-api-access-p5dk8\") pod \"openshift-controller-manager-operator-8c94f4649-g25jq\" (UID: \"3646e0cd-49c9-4a98-a2e3-efe9359cc6c4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq" Mar 18 09:54:21.961066 master-0 kubenswrapper[3991]: I0318 09:54:21.958642 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-2glpv\" (UID: \"6f266bad-8b30-4300-ad93-9d48e61f2440\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" Mar 18 09:54:21.961066 master-0 kubenswrapper[3991]: I0318 09:54:21.958658 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert\") pod \"catalog-operator-68f85b4d6c-fhz5s\" (UID: \"ee376320-9ca0-444d-ab37-9cbcb6729b11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s" Mar 18 09:54:21.961066 master-0 kubenswrapper[3991]: I0318 09:54:21.958675 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/d26036f1-bdce-4ec5-873f-962fa7e8e6c1-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-mrc8q\" (UID: \"d26036f1-bdce-4ec5-873f-962fa7e8e6c1\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q" Mar 18 09:54:21.961066 master-0 kubenswrapper[3991]: I0318 09:54:21.958700 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-r8fkv\" (UID: \"d4d2218c-f9df-4d43-8727-ed3a920e23f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv" Mar 18 09:54:21.961066 master-0 kubenswrapper[3991]: I0318 09:54:21.958725 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb7tz\" (UniqueName: \"kubernetes.io/projected/8ee99294-4785-49d0-b493-0d734cf09396-kube-api-access-tb7tz\") pod 
\"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m" Mar 18 09:54:21.961066 master-0 kubenswrapper[3991]: I0318 09:54:21.958749 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6a6a616d-012a-479e-ab3d-b21295ea1805-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-smghb\" (UID: \"6a6a616d-012a-479e-ab3d-b21295ea1805\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb" Mar 18 09:54:21.961066 master-0 kubenswrapper[3991]: I0318 09:54:21.958769 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4qp9\" (UniqueName: \"kubernetes.io/projected/d4d2218c-f9df-4d43-8727-ed3a920e23f7-kube-api-access-w4qp9\") pod \"package-server-manager-7b95f86987-r8fkv\" (UID: \"d4d2218c-f9df-4d43-8727-ed3a920e23f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv" Mar 18 09:54:21.961066 master-0 kubenswrapper[3991]: I0318 09:54:21.958785 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert\") pod \"olm-operator-5c9796789-hc74k\" (UID: \"db52ca42-e458-407f-9eeb-bf6de6405edc\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k" Mar 18 09:54:21.961537 master-0 kubenswrapper[3991]: I0318 09:54:21.958801 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/accc57fb-75f5-4f89-9804-6ede7f77e27c-bound-sa-token\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " 
pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" Mar 18 09:54:21.961537 master-0 kubenswrapper[3991]: I0318 09:54:21.958818 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4-config\") pod \"openshift-controller-manager-operator-8c94f4649-g25jq\" (UID: \"3646e0cd-49c9-4a98-a2e3-efe9359cc6c4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq" Mar 18 09:54:21.961537 master-0 kubenswrapper[3991]: I0318 09:54:21.958856 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0999f781-3299-4cb6-ba76-2a4f4584c685-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-pzqqc\" (UID: \"0999f781-3299-4cb6-ba76-2a4f4584c685\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc" Mar 18 09:54:21.961537 master-0 kubenswrapper[3991]: I0318 09:54:21.958873 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-2glpv\" (UID: \"6f266bad-8b30-4300-ad93-9d48e61f2440\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" Mar 18 09:54:21.961537 master-0 kubenswrapper[3991]: I0318 09:54:21.958890 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwfph\" (UniqueName: \"kubernetes.io/projected/accc57fb-75f5-4f89-9804-6ede7f77e27c-kube-api-access-nwfph\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" Mar 18 09:54:21.961537 master-0 
kubenswrapper[3991]: I0318 09:54:21.958906 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a078565a-6970-4f42-84f4-938f1d637245-serving-cert\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" Mar 18 09:54:21.961537 master-0 kubenswrapper[3991]: I0318 09:54:21.958922 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ee99294-4785-49d0-b493-0d734cf09396-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m" Mar 18 09:54:21.961537 master-0 kubenswrapper[3991]: I0318 09:54:21.958936 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a078565a-6970-4f42-84f4-938f1d637245-config\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" Mar 18 09:54:21.961537 master-0 kubenswrapper[3991]: I0318 09:54:21.958969 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m" Mar 18 09:54:22.059535 master-0 kubenswrapper[3991]: I0318 09:54:22.059489 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a078565a-6970-4f42-84f4-938f1d637245-config\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" Mar 18 09:54:22.059719 master-0 kubenswrapper[3991]: I0318 09:54:22.059693 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m" Mar 18 09:54:22.059771 master-0 kubenswrapper[3991]: I0318 09:54:22.059739 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480-serving-cert\") pod \"openshift-config-operator-95bf4f4d-495pg\" (UID: \"0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" Mar 18 09:54:22.060371 master-0 kubenswrapper[3991]: E0318 09:54:22.059857 3991 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 09:54:22.060371 master-0 kubenswrapper[3991]: E0318 09:54:22.059921 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls podName:8ee99294-4785-49d0-b493-0d734cf09396 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:22.559899322 +0000 UTC m=+146.518839227 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-f4f7m" (UID: "8ee99294-4785-49d0-b493-0d734cf09396") : secret "image-registry-operator-tls" not found Mar 18 09:54:22.060371 master-0 kubenswrapper[3991]: I0318 09:54:22.059990 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls\") pod \"dns-operator-9c5679d8f-jrmkr\" (UID: \"8cb5158f-2199-42c0-995a-8490c9ec8a95\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-jrmkr" Mar 18 09:54:22.060371 master-0 kubenswrapper[3991]: I0318 09:54:22.060083 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxv6v\" (UniqueName: \"kubernetes.io/projected/a078565a-6970-4f42-84f4-938f1d637245-kube-api-access-cxv6v\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" Mar 18 09:54:22.060371 master-0 kubenswrapper[3991]: I0318 09:54:22.060194 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-8kx9m\" (UID: \"f69a00b6-d908-4485-bb0d-57594fc01d24\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m" Mar 18 09:54:22.060371 master-0 kubenswrapper[3991]: I0318 09:54:22.060241 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a078565a-6970-4f42-84f4-938f1d637245-config\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: 
\"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" Mar 18 09:54:22.060591 master-0 kubenswrapper[3991]: I0318 09:54:22.060500 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" Mar 18 09:54:22.060652 master-0 kubenswrapper[3991]: E0318 09:54:22.060625 3991 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 09:54:22.060688 master-0 kubenswrapper[3991]: E0318 09:54:22.060665 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls podName:accc57fb-75f5-4f89-9804-6ede7f77e27c nodeName:}" failed. No retries permitted until 2026-03-18 09:54:22.56065225 +0000 UTC m=+146.519592145 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls") pod "ingress-operator-66b84d69b-kr5kz" (UID: "accc57fb-75f5-4f89-9804-6ede7f77e27c") : secret "metrics-tls" not found Mar 18 09:54:22.060757 master-0 kubenswrapper[3991]: I0318 09:54:22.060741 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a6a616d-012a-479e-ab3d-b21295ea1805-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-smghb\" (UID: \"6a6a616d-012a-479e-ab3d-b21295ea1805\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb" Mar 18 09:54:22.060835 master-0 kubenswrapper[3991]: I0318 09:54:22.060800 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a078565a-6970-4f42-84f4-938f1d637245-etcd-client\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" Mar 18 09:54:22.060972 master-0 kubenswrapper[3991]: I0318 09:54:22.060954 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/62b82d72-d73c-451a-84e1-551d73036aa8-host-slash\") pod \"iptables-alerter-r7h65\" (UID: \"62b82d72-d73c-451a-84e1-551d73036aa8\") " pod="openshift-network-operator/iptables-alerter-r7h65" Mar 18 09:54:22.061013 master-0 kubenswrapper[3991]: I0318 09:54:22.060978 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkzq9\" (UniqueName: \"kubernetes.io/projected/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-kube-api-access-dkzq9\") pod \"multus-admission-controller-5dbbb8b86f-hkzr2\" (UID: \"ca4a0040-a638-46fa-a1cb-a19d83a7ebe4\") " 
pod="openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2" Mar 18 09:54:22.061013 master-0 kubenswrapper[3991]: I0318 09:54:22.061001 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx9p2\" (UniqueName: \"kubernetes.io/projected/db52ca42-e458-407f-9eeb-bf6de6405edc-kube-api-access-jx9p2\") pod \"olm-operator-5c9796789-hc74k\" (UID: \"db52ca42-e458-407f-9eeb-bf6de6405edc\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k" Mar 18 09:54:22.061082 master-0 kubenswrapper[3991]: I0318 09:54:22.061018 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a078565a-6970-4f42-84f4-938f1d637245-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" Mar 18 09:54:22.061082 master-0 kubenswrapper[3991]: I0318 09:54:22.061033 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shbrj\" (UniqueName: \"kubernetes.io/projected/6f266bad-8b30-4300-ad93-9d48e61f2440-kube-api-access-shbrj\") pod \"marketplace-operator-89ccd998f-2glpv\" (UID: \"6f266bad-8b30-4300-ad93-9d48e61f2440\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" Mar 18 09:54:22.061082 master-0 kubenswrapper[3991]: I0318 09:54:22.061051 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/f69a00b6-d908-4485-bb0d-57594fc01d24-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-8kx9m\" (UID: \"f69a00b6-d908-4485-bb0d-57594fc01d24\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m" Mar 18 09:54:22.062171 master-0 kubenswrapper[3991]: I0318 09:54:22.061554 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a078565a-6970-4f42-84f4-938f1d637245-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" Mar 18 09:54:22.062171 master-0 kubenswrapper[3991]: I0318 09:54:22.061624 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhzg4\" (UniqueName: \"kubernetes.io/projected/d26036f1-bdce-4ec5-873f-962fa7e8e6c1-kube-api-access-lhzg4\") pod \"cluster-olm-operator-67dcd4998-mrc8q\" (UID: \"d26036f1-bdce-4ec5-873f-962fa7e8e6c1\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q" Mar 18 09:54:22.062171 master-0 kubenswrapper[3991]: I0318 09:54:22.061679 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8ee99294-4785-49d0-b493-0d734cf09396-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m" Mar 18 09:54:22.062171 master-0 kubenswrapper[3991]: I0318 09:54:22.062016 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a6a616d-012a-479e-ab3d-b21295ea1805-config\") pod \"kube-apiserver-operator-8b68b9d9b-smghb\" (UID: \"6a6a616d-012a-479e-ab3d-b21295ea1805\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb" Mar 18 09:54:22.062171 master-0 kubenswrapper[3991]: I0318 09:54:22.062064 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-g25jq\" (UID: \"3646e0cd-49c9-4a98-a2e3-efe9359cc6c4\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq" Mar 18 09:54:22.062476 master-0 kubenswrapper[3991]: I0318 09:54:22.062431 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/accc57fb-75f5-4f89-9804-6ede7f77e27c-trusted-ca\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" Mar 18 09:54:22.062528 master-0 kubenswrapper[3991]: I0318 09:54:22.062515 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d72e695-0183-4ee8-8add-5425e67f7138-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-zz68c\" (UID: \"0d72e695-0183-4ee8-8add-5425e67f7138\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c" Mar 18 09:54:22.062556 master-0 kubenswrapper[3991]: I0318 09:54:22.062542 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2635254-a491-42e5-b598-461c24bf77ca-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" Mar 18 09:54:22.062670 master-0 kubenswrapper[3991]: I0318 09:54:22.062643 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2chb\" (UniqueName: \"kubernetes.io/projected/8cb5158f-2199-42c0-995a-8490c9ec8a95-kube-api-access-p2chb\") pod \"dns-operator-9c5679d8f-jrmkr\" (UID: \"8cb5158f-2199-42c0-995a-8490c9ec8a95\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-jrmkr" Mar 18 09:54:22.062717 master-0 kubenswrapper[3991]: I0318 09:54:22.062671 3991 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0999f781-3299-4cb6-ba76-2a4f4584c685-config\") pod \"kube-controller-manager-operator-ff989d6cc-pzqqc\" (UID: \"0999f781-3299-4cb6-ba76-2a4f4584c685\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc" Mar 18 09:54:22.062883 master-0 kubenswrapper[3991]: I0318 09:54:22.062858 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a6a616d-012a-479e-ab3d-b21295ea1805-config\") pod \"kube-apiserver-operator-8b68b9d9b-smghb\" (UID: \"6a6a616d-012a-479e-ab3d-b21295ea1805\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb" Mar 18 09:54:22.063293 master-0 kubenswrapper[3991]: I0318 09:54:22.063269 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-vj8tt\" (UID: \"3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt" Mar 18 09:54:22.063769 master-0 kubenswrapper[3991]: I0318 09:54:22.063742 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4hfd\" (UniqueName: \"kubernetes.io/projected/c2635254-a491-42e5-b598-461c24bf77ca-kube-api-access-p4hfd\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" Mar 18 09:54:22.063850 master-0 kubenswrapper[3991]: I0318 09:54:22.063766 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/accc57fb-75f5-4f89-9804-6ede7f77e27c-trusted-ca\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" Mar 18 09:54:22.063850 master-0 kubenswrapper[3991]: I0318 09:54:22.063780 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec53d7fa-445b-4e1d-84ef-545f08e80ccc-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-lk698\" (UID: \"ec53d7fa-445b-4e1d-84ef-545f08e80ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698" Mar 18 09:54:22.063850 master-0 kubenswrapper[3991]: I0318 09:54:22.063807 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb35841e-d992-4044-aaaa-06c9faf47bd0-serving-cert\") pod \"service-ca-operator-b865698dc-pgtbr\" (UID: \"bb35841e-d992-4044-aaaa-06c9faf47bd0\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr" Mar 18 09:54:22.063850 master-0 kubenswrapper[3991]: I0318 09:54:22.063845 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" Mar 18 09:54:22.064004 master-0 kubenswrapper[3991]: I0318 09:54:22.063877 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0999f781-3299-4cb6-ba76-2a4f4584c685-serving-cert\") pod 
\"kube-controller-manager-operator-ff989d6cc-pzqqc\" (UID: \"0999f781-3299-4cb6-ba76-2a4f4584c685\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc" Mar 18 09:54:22.064054 master-0 kubenswrapper[3991]: I0318 09:54:22.064016 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-495pg\" (UID: \"0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" Mar 18 09:54:22.064099 master-0 kubenswrapper[3991]: I0318 09:54:22.064075 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d72e695-0183-4ee8-8add-5425e67f7138-config\") pod \"openshift-apiserver-operator-d65958b8-zz68c\" (UID: \"0d72e695-0183-4ee8-8add-5425e67f7138\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c" Mar 18 09:54:22.064160 master-0 kubenswrapper[3991]: I0318 09:54:22.064130 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25k9g\" (UniqueName: \"kubernetes.io/projected/ee376320-9ca0-444d-ab37-9cbcb6729b11-kube-api-access-25k9g\") pod \"catalog-operator-68f85b4d6c-fhz5s\" (UID: \"ee376320-9ca0-444d-ab37-9cbcb6729b11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s" Mar 18 09:54:22.064229 master-0 kubenswrapper[3991]: I0318 09:54:22.064185 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/62b82d72-d73c-451a-84e1-551d73036aa8-iptables-alerter-script\") pod \"iptables-alerter-r7h65\" (UID: \"62b82d72-d73c-451a-84e1-551d73036aa8\") " 
pod="openshift-network-operator/iptables-alerter-r7h65" Mar 18 09:54:22.064286 master-0 kubenswrapper[3991]: I0318 09:54:22.064252 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6bvr\" (UniqueName: \"kubernetes.io/projected/0d72e695-0183-4ee8-8add-5425e67f7138-kube-api-access-g6bvr\") pod \"openshift-apiserver-operator-d65958b8-zz68c\" (UID: \"0d72e695-0183-4ee8-8add-5425e67f7138\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c" Mar 18 09:54:22.064359 master-0 kubenswrapper[3991]: I0318 09:54:22.064319 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/d26036f1-bdce-4ec5-873f-962fa7e8e6c1-operand-assets\") pod \"cluster-olm-operator-67dcd4998-mrc8q\" (UID: \"d26036f1-bdce-4ec5-873f-962fa7e8e6c1\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q" Mar 18 09:54:22.064532 master-0 kubenswrapper[3991]: I0318 09:54:22.064386 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a078565a-6970-4f42-84f4-938f1d637245-etcd-ca\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" Mar 18 09:54:22.064532 master-0 kubenswrapper[3991]: I0318 09:54:22.064425 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f076eaf0-b041-4db0-ba06-3d85e23bb654-config\") pod \"authentication-operator-5885bfd7f4-4q9tr\" (UID: \"f076eaf0-b041-4db0-ba06-3d85e23bb654\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr" Mar 18 09:54:22.064532 master-0 kubenswrapper[3991]: I0318 09:54:22.064460 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6-config\") pod \"openshift-kube-scheduler-operator-dddff6458-vj8tt\" (UID: \"3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt" Mar 18 09:54:22.064532 master-0 kubenswrapper[3991]: I0318 09:54:22.064493 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f25pg\" (UniqueName: \"kubernetes.io/projected/f076eaf0-b041-4db0-ba06-3d85e23bb654-kube-api-access-f25pg\") pod \"authentication-operator-5885bfd7f4-4q9tr\" (UID: \"f076eaf0-b041-4db0-ba06-3d85e23bb654\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr" Mar 18 09:54:22.064532 master-0 kubenswrapper[3991]: I0318 09:54:22.064526 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-vj8tt\" (UID: \"3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt" Mar 18 09:54:22.064756 master-0 kubenswrapper[3991]: I0318 09:54:22.064575 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb35841e-d992-4044-aaaa-06c9faf47bd0-config\") pod \"service-ca-operator-b865698dc-pgtbr\" (UID: \"bb35841e-d992-4044-aaaa-06c9faf47bd0\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr" Mar 18 09:54:22.064756 master-0 kubenswrapper[3991]: I0318 09:54:22.064606 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f076eaf0-b041-4db0-ba06-3d85e23bb654-serving-cert\") pod 
\"authentication-operator-5885bfd7f4-4q9tr\" (UID: \"f076eaf0-b041-4db0-ba06-3d85e23bb654\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr" Mar 18 09:54:22.064756 master-0 kubenswrapper[3991]: I0318 09:54:22.064660 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r7qd\" (UniqueName: \"kubernetes.io/projected/f69a00b6-d908-4485-bb0d-57594fc01d24-kube-api-access-5r7qd\") pod \"cluster-monitoring-operator-58845fbb57-8kx9m\" (UID: \"f69a00b6-d908-4485-bb0d-57594fc01d24\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m" Mar 18 09:54:22.064756 master-0 kubenswrapper[3991]: I0318 09:54:22.064744 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5dk8\" (UniqueName: \"kubernetes.io/projected/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4-kube-api-access-p5dk8\") pod \"openshift-controller-manager-operator-8c94f4649-g25jq\" (UID: \"3646e0cd-49c9-4a98-a2e3-efe9359cc6c4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq" Mar 18 09:54:22.065252 master-0 kubenswrapper[3991]: I0318 09:54:22.065116 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-2glpv\" (UID: \"6f266bad-8b30-4300-ad93-9d48e61f2440\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" Mar 18 09:54:22.065252 master-0 kubenswrapper[3991]: I0318 09:54:22.065169 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert\") pod \"catalog-operator-68f85b4d6c-fhz5s\" (UID: \"ee376320-9ca0-444d-ab37-9cbcb6729b11\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s" Mar 18 09:54:22.065252 master-0 kubenswrapper[3991]: I0318 09:54:22.065211 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj9sq\" (UniqueName: \"kubernetes.io/projected/ec53d7fa-445b-4e1d-84ef-545f08e80ccc-kube-api-access-wj9sq\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-lk698\" (UID: \"ec53d7fa-445b-4e1d-84ef-545f08e80ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698" Mar 18 09:54:22.065446 master-0 kubenswrapper[3991]: E0318 09:54:22.065386 3991 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 09:54:22.065446 master-0 kubenswrapper[3991]: I0318 09:54:22.065434 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/d26036f1-bdce-4ec5-873f-962fa7e8e6c1-operand-assets\") pod \"cluster-olm-operator-67dcd4998-mrc8q\" (UID: \"d26036f1-bdce-4ec5-873f-962fa7e8e6c1\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q" Mar 18 09:54:22.065754 master-0 kubenswrapper[3991]: E0318 09:54:22.065728 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics podName:6f266bad-8b30-4300-ad93-9d48e61f2440 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:22.565491997 +0000 UTC m=+146.524431942 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-2glpv" (UID: "6f266bad-8b30-4300-ad93-9d48e61f2440") : secret "marketplace-operator-metrics" not found Mar 18 09:54:22.065897 master-0 kubenswrapper[3991]: I0318 09:54:22.065801 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-r8fkv\" (UID: \"d4d2218c-f9df-4d43-8727-ed3a920e23f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv" Mar 18 09:54:22.065943 master-0 kubenswrapper[3991]: E0318 09:54:22.065897 3991 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 09:54:22.065992 master-0 kubenswrapper[3991]: E0318 09:54:22.065942 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert podName:d4d2218c-f9df-4d43-8727-ed3a920e23f7 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:22.565929637 +0000 UTC m=+146.524869632 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-r8fkv" (UID: "d4d2218c-f9df-4d43-8727-ed3a920e23f7") : secret "package-server-manager-serving-cert" not found Mar 18 09:54:22.065992 master-0 kubenswrapper[3991]: I0318 09:54:22.065934 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb7tz\" (UniqueName: \"kubernetes.io/projected/8ee99294-4785-49d0-b493-0d734cf09396-kube-api-access-tb7tz\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m" Mar 18 09:54:22.066070 master-0 kubenswrapper[3991]: I0318 09:54:22.066004 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a078565a-6970-4f42-84f4-938f1d637245-etcd-ca\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" Mar 18 09:54:22.066070 master-0 kubenswrapper[3991]: I0318 09:54:22.066003 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f076eaf0-b041-4db0-ba06-3d85e23bb654-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-4q9tr\" (UID: \"f076eaf0-b041-4db0-ba06-3d85e23bb654\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr" Mar 18 09:54:22.066143 master-0 kubenswrapper[3991]: I0318 09:54:22.066073 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/d26036f1-bdce-4ec5-873f-962fa7e8e6c1-cluster-olm-operator-serving-cert\") pod 
\"cluster-olm-operator-67dcd4998-mrc8q\" (UID: \"d26036f1-bdce-4ec5-873f-962fa7e8e6c1\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q" Mar 18 09:54:22.066143 master-0 kubenswrapper[3991]: I0318 09:54:22.066103 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s54f9\" (UniqueName: \"kubernetes.io/projected/8e812dd9-cd05-4e9e-8710-d0920181ece2-kube-api-access-s54f9\") pod \"csi-snapshot-controller-operator-5f5d689c6b-mqbmq\" (UID: \"8e812dd9-cd05-4e9e-8710-d0920181ece2\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-mqbmq" Mar 18 09:54:22.066143 master-0 kubenswrapper[3991]: I0318 09:54:22.066130 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6a6a616d-012a-479e-ab3d-b21295ea1805-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-smghb\" (UID: \"6a6a616d-012a-479e-ab3d-b21295ea1805\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb" Mar 18 09:54:22.066250 master-0 kubenswrapper[3991]: I0318 09:54:22.066152 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4qp9\" (UniqueName: \"kubernetes.io/projected/d4d2218c-f9df-4d43-8727-ed3a920e23f7-kube-api-access-w4qp9\") pod \"package-server-manager-7b95f86987-r8fkv\" (UID: \"d4d2218c-f9df-4d43-8727-ed3a920e23f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv" Mar 18 09:54:22.066550 master-0 kubenswrapper[3991]: I0318 09:54:22.066459 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" Mar 18 09:54:22.066550 master-0 kubenswrapper[3991]: I0318 09:54:22.066493 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-hkzr2\" (UID: \"ca4a0040-a638-46fa-a1cb-a19d83a7ebe4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2" Mar 18 09:54:22.066550 master-0 kubenswrapper[3991]: I0318 09:54:22.066533 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec53d7fa-445b-4e1d-84ef-545f08e80ccc-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-lk698\" (UID: \"ec53d7fa-445b-4e1d-84ef-545f08e80ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698" Mar 18 09:54:22.066704 master-0 kubenswrapper[3991]: I0318 09:54:22.066554 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlxfz\" (UniqueName: \"kubernetes.io/projected/bb35841e-d992-4044-aaaa-06c9faf47bd0-kube-api-access-zlxfz\") pod \"service-ca-operator-b865698dc-pgtbr\" (UID: \"bb35841e-d992-4044-aaaa-06c9faf47bd0\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr" Mar 18 09:54:22.066704 master-0 kubenswrapper[3991]: I0318 09:54:22.066577 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f076eaf0-b041-4db0-ba06-3d85e23bb654-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-4q9tr\" (UID: \"f076eaf0-b041-4db0-ba06-3d85e23bb654\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr" Mar 18 
09:54:22.066704 master-0 kubenswrapper[3991]: I0318 09:54:22.066621 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert\") pod \"olm-operator-5c9796789-hc74k\" (UID: \"db52ca42-e458-407f-9eeb-bf6de6405edc\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k" Mar 18 09:54:22.066704 master-0 kubenswrapper[3991]: I0318 09:54:22.066680 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/accc57fb-75f5-4f89-9804-6ede7f77e27c-bound-sa-token\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" Mar 18 09:54:22.066873 master-0 kubenswrapper[3991]: I0318 09:54:22.066704 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fjk8\" (UniqueName: \"kubernetes.io/projected/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480-kube-api-access-9fjk8\") pod \"openshift-config-operator-95bf4f4d-495pg\" (UID: \"0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" Mar 18 09:54:22.066873 master-0 kubenswrapper[3991]: I0318 09:54:22.066734 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0999f781-3299-4cb6-ba76-2a4f4584c685-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-pzqqc\" (UID: \"0999f781-3299-4cb6-ba76-2a4f4584c685\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc" Mar 18 09:54:22.067728 master-0 kubenswrapper[3991]: E0318 09:54:22.067698 3991 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not 
found Mar 18 09:54:22.068506 master-0 kubenswrapper[3991]: I0318 09:54:22.068478 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6-config\") pod \"openshift-kube-scheduler-operator-dddff6458-vj8tt\" (UID: \"3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt" Mar 18 09:54:22.068572 master-0 kubenswrapper[3991]: I0318 09:54:22.068476 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-2glpv\" (UID: \"6f266bad-8b30-4300-ad93-9d48e61f2440\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" Mar 18 09:54:22.068572 master-0 kubenswrapper[3991]: E0318 09:54:22.068537 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert podName:db52ca42-e458-407f-9eeb-bf6de6405edc nodeName:}" failed. No retries permitted until 2026-03-18 09:54:22.568497119 +0000 UTC m=+146.527437054 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert") pod "olm-operator-5c9796789-hc74k" (UID: "db52ca42-e458-407f-9eeb-bf6de6405edc") : secret "olm-operator-serving-cert" not found Mar 18 09:54:22.068636 master-0 kubenswrapper[3991]: I0318 09:54:22.068599 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwfph\" (UniqueName: \"kubernetes.io/projected/accc57fb-75f5-4f89-9804-6ede7f77e27c-kube-api-access-nwfph\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" Mar 18 09:54:22.068732 master-0 kubenswrapper[3991]: I0318 09:54:22.068694 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4-config\") pod \"openshift-controller-manager-operator-8c94f4649-g25jq\" (UID: \"3646e0cd-49c9-4a98-a2e3-efe9359cc6c4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq" Mar 18 09:54:22.068813 master-0 kubenswrapper[3991]: I0318 09:54:22.068782 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a078565a-6970-4f42-84f4-938f1d637245-serving-cert\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" Mar 18 09:54:22.068813 master-0 kubenswrapper[3991]: I0318 09:54:22.068801 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-g25jq\" (UID: \"3646e0cd-49c9-4a98-a2e3-efe9359cc6c4\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq" Mar 18 09:54:22.069141 master-0 kubenswrapper[3991]: I0318 09:54:22.069106 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ee99294-4785-49d0-b493-0d734cf09396-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m" Mar 18 09:54:22.069187 master-0 kubenswrapper[3991]: I0318 09:54:22.069155 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvnrf\" (UniqueName: \"kubernetes.io/projected/62b82d72-d73c-451a-84e1-551d73036aa8-kube-api-access-lvnrf\") pod \"iptables-alerter-r7h65\" (UID: \"62b82d72-d73c-451a-84e1-551d73036aa8\") " pod="openshift-network-operator/iptables-alerter-r7h65" Mar 18 09:54:22.069891 master-0 kubenswrapper[3991]: E0318 09:54:22.069854 3991 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 09:54:22.070978 master-0 kubenswrapper[3991]: I0318 09:54:22.070933 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-vj8tt\" (UID: \"3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt" Mar 18 09:54:22.071153 master-0 kubenswrapper[3991]: E0318 09:54:22.071099 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert podName:ee376320-9ca0-444d-ab37-9cbcb6729b11 nodeName:}" failed. 
No retries permitted until 2026-03-18 09:54:22.57104435 +0000 UTC m=+146.529984285 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert") pod "catalog-operator-68f85b4d6c-fhz5s" (UID: "ee376320-9ca0-444d-ab37-9cbcb6729b11") : secret "catalog-operator-serving-cert" not found Mar 18 09:54:22.071220 master-0 kubenswrapper[3991]: I0318 09:54:22.070206 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4-config\") pod \"openshift-controller-manager-operator-8c94f4649-g25jq\" (UID: \"3646e0cd-49c9-4a98-a2e3-efe9359cc6c4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq" Mar 18 09:54:22.071644 master-0 kubenswrapper[3991]: I0318 09:54:22.071622 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0999f781-3299-4cb6-ba76-2a4f4584c685-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-pzqqc\" (UID: \"0999f781-3299-4cb6-ba76-2a4f4584c685\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc" Mar 18 09:54:22.072097 master-0 kubenswrapper[3991]: I0318 09:54:22.072078 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ee99294-4785-49d0-b493-0d734cf09396-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m" Mar 18 09:54:22.072435 master-0 kubenswrapper[3991]: I0318 09:54:22.072415 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0999f781-3299-4cb6-ba76-2a4f4584c685-config\") pod 
\"kube-controller-manager-operator-ff989d6cc-pzqqc\" (UID: \"0999f781-3299-4cb6-ba76-2a4f4584c685\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc" Mar 18 09:54:22.072603 master-0 kubenswrapper[3991]: I0318 09:54:22.072561 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a078565a-6970-4f42-84f4-938f1d637245-etcd-client\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" Mar 18 09:54:22.072901 master-0 kubenswrapper[3991]: I0318 09:54:22.072852 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a078565a-6970-4f42-84f4-938f1d637245-serving-cert\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" Mar 18 09:54:22.073201 master-0 kubenswrapper[3991]: I0318 09:54:22.073172 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-2glpv\" (UID: \"6f266bad-8b30-4300-ad93-9d48e61f2440\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" Mar 18 09:54:22.073375 master-0 kubenswrapper[3991]: I0318 09:54:22.073359 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a6a616d-012a-479e-ab3d-b21295ea1805-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-smghb\" (UID: \"6a6a616d-012a-479e-ab3d-b21295ea1805\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb" Mar 18 09:54:22.077285 master-0 kubenswrapper[3991]: I0318 09:54:22.077256 3991 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/d26036f1-bdce-4ec5-873f-962fa7e8e6c1-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-mrc8q\" (UID: \"d26036f1-bdce-4ec5-873f-962fa7e8e6c1\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q"
Mar 18 09:54:22.173896 master-0 kubenswrapper[3991]: I0318 09:54:22.169782 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/62b82d72-d73c-451a-84e1-551d73036aa8-host-slash\") pod \"iptables-alerter-r7h65\" (UID: \"62b82d72-d73c-451a-84e1-551d73036aa8\") " pod="openshift-network-operator/iptables-alerter-r7h65"
Mar 18 09:54:22.173896 master-0 kubenswrapper[3991]: I0318 09:54:22.169896 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkzq9\" (UniqueName: \"kubernetes.io/projected/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-kube-api-access-dkzq9\") pod \"multus-admission-controller-5dbbb8b86f-hkzr2\" (UID: \"ca4a0040-a638-46fa-a1cb-a19d83a7ebe4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2"
Mar 18 09:54:22.173896 master-0 kubenswrapper[3991]: I0318 09:54:22.169911 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/62b82d72-d73c-451a-84e1-551d73036aa8-host-slash\") pod \"iptables-alerter-r7h65\" (UID: \"62b82d72-d73c-451a-84e1-551d73036aa8\") " pod="openshift-network-operator/iptables-alerter-r7h65"
Mar 18 09:54:22.173896 master-0 kubenswrapper[3991]: I0318 09:54:22.169963 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/f69a00b6-d908-4485-bb0d-57594fc01d24-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-8kx9m\" (UID: \"f69a00b6-d908-4485-bb0d-57594fc01d24\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m"
Mar 18 09:54:22.173896 master-0 kubenswrapper[3991]: I0318 09:54:22.170147 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2635254-a491-42e5-b598-461c24bf77ca-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6"
Mar 18 09:54:22.173896 master-0 kubenswrapper[3991]: I0318 09:54:22.170171 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2chb\" (UniqueName: \"kubernetes.io/projected/8cb5158f-2199-42c0-995a-8490c9ec8a95-kube-api-access-p2chb\") pod \"dns-operator-9c5679d8f-jrmkr\" (UID: \"8cb5158f-2199-42c0-995a-8490c9ec8a95\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-jrmkr"
Mar 18 09:54:22.173896 master-0 kubenswrapper[3991]: I0318 09:54:22.170188 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d72e695-0183-4ee8-8add-5425e67f7138-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-zz68c\" (UID: \"0d72e695-0183-4ee8-8add-5425e67f7138\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c"
Mar 18 09:54:22.173896 master-0 kubenswrapper[3991]: I0318 09:54:22.170240 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4hfd\" (UniqueName: \"kubernetes.io/projected/c2635254-a491-42e5-b598-461c24bf77ca-kube-api-access-p4hfd\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6"
Mar 18 09:54:22.173896 master-0 kubenswrapper[3991]: I0318 09:54:22.170260 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec53d7fa-445b-4e1d-84ef-545f08e80ccc-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-lk698\" (UID: \"ec53d7fa-445b-4e1d-84ef-545f08e80ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698"
Mar 18 09:54:22.173896 master-0 kubenswrapper[3991]: I0318 09:54:22.171028 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-495pg\" (UID: \"0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg"
Mar 18 09:54:22.173896 master-0 kubenswrapper[3991]: I0318 09:54:22.171077 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb35841e-d992-4044-aaaa-06c9faf47bd0-serving-cert\") pod \"service-ca-operator-b865698dc-pgtbr\" (UID: \"bb35841e-d992-4044-aaaa-06c9faf47bd0\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr"
Mar 18 09:54:22.173896 master-0 kubenswrapper[3991]: I0318 09:54:22.171640 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/f69a00b6-d908-4485-bb0d-57594fc01d24-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-8kx9m\" (UID: \"f69a00b6-d908-4485-bb0d-57594fc01d24\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m"
Mar 18 09:54:22.173896 master-0 kubenswrapper[3991]: I0318 09:54:22.171660 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-495pg\" (UID: \"0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg"
Mar 18 09:54:22.173896 master-0 kubenswrapper[3991]: I0318 09:54:22.171751 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6"
Mar 18 09:54:22.173896 master-0 kubenswrapper[3991]: I0318 09:54:22.171795 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d72e695-0183-4ee8-8add-5425e67f7138-config\") pod \"openshift-apiserver-operator-d65958b8-zz68c\" (UID: \"0d72e695-0183-4ee8-8add-5425e67f7138\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c"
Mar 18 09:54:22.174657 master-0 kubenswrapper[3991]: I0318 09:54:22.171872 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/62b82d72-d73c-451a-84e1-551d73036aa8-iptables-alerter-script\") pod \"iptables-alerter-r7h65\" (UID: \"62b82d72-d73c-451a-84e1-551d73036aa8\") " pod="openshift-network-operator/iptables-alerter-r7h65"
Mar 18 09:54:22.174657 master-0 kubenswrapper[3991]: I0318 09:54:22.171909 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6bvr\" (UniqueName: \"kubernetes.io/projected/0d72e695-0183-4ee8-8add-5425e67f7138-kube-api-access-g6bvr\") pod \"openshift-apiserver-operator-d65958b8-zz68c\" (UID: \"0d72e695-0183-4ee8-8add-5425e67f7138\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c"
Mar 18 09:54:22.174657 master-0 kubenswrapper[3991]: I0318 09:54:22.171946 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f076eaf0-b041-4db0-ba06-3d85e23bb654-config\") pod \"authentication-operator-5885bfd7f4-4q9tr\" (UID: \"f076eaf0-b041-4db0-ba06-3d85e23bb654\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr"
Mar 18 09:54:22.174657 master-0 kubenswrapper[3991]: E0318 09:54:22.172081 3991 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 18 09:54:22.174657 master-0 kubenswrapper[3991]: E0318 09:54:22.172154 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls podName:c2635254-a491-42e5-b598-461c24bf77ca nodeName:}" failed. No retries permitted until 2026-03-18 09:54:22.672130332 +0000 UTC m=+146.631070267 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-s7rm6" (UID: "c2635254-a491-42e5-b598-461c24bf77ca") : secret "node-tuning-operator-tls" not found
Mar 18 09:54:22.174657 master-0 kubenswrapper[3991]: I0318 09:54:22.172293 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f25pg\" (UniqueName: \"kubernetes.io/projected/f076eaf0-b041-4db0-ba06-3d85e23bb654-kube-api-access-f25pg\") pod \"authentication-operator-5885bfd7f4-4q9tr\" (UID: \"f076eaf0-b041-4db0-ba06-3d85e23bb654\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr"
Mar 18 09:54:22.174657 master-0 kubenswrapper[3991]: I0318 09:54:22.172335 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2635254-a491-42e5-b598-461c24bf77ca-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6"
Mar 18 09:54:22.174657 master-0 kubenswrapper[3991]: I0318 09:54:22.172348 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb35841e-d992-4044-aaaa-06c9faf47bd0-config\") pod \"service-ca-operator-b865698dc-pgtbr\" (UID: \"bb35841e-d992-4044-aaaa-06c9faf47bd0\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr"
Mar 18 09:54:22.174657 master-0 kubenswrapper[3991]: I0318 09:54:22.172424 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f076eaf0-b041-4db0-ba06-3d85e23bb654-serving-cert\") pod \"authentication-operator-5885bfd7f4-4q9tr\" (UID: \"f076eaf0-b041-4db0-ba06-3d85e23bb654\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr"
Mar 18 09:54:22.174657 master-0 kubenswrapper[3991]: I0318 09:54:22.172482 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r7qd\" (UniqueName: \"kubernetes.io/projected/f69a00b6-d908-4485-bb0d-57594fc01d24-kube-api-access-5r7qd\") pod \"cluster-monitoring-operator-58845fbb57-8kx9m\" (UID: \"f69a00b6-d908-4485-bb0d-57594fc01d24\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m"
Mar 18 09:54:22.174657 master-0 kubenswrapper[3991]: I0318 09:54:22.172553 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj9sq\" (UniqueName: \"kubernetes.io/projected/ec53d7fa-445b-4e1d-84ef-545f08e80ccc-kube-api-access-wj9sq\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-lk698\" (UID: \"ec53d7fa-445b-4e1d-84ef-545f08e80ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698"
Mar 18 09:54:22.174657 master-0 kubenswrapper[3991]: I0318 09:54:22.172613 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f076eaf0-b041-4db0-ba06-3d85e23bb654-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-4q9tr\" (UID: \"f076eaf0-b041-4db0-ba06-3d85e23bb654\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr"
Mar 18 09:54:22.174657 master-0 kubenswrapper[3991]: I0318 09:54:22.172659 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s54f9\" (UniqueName: \"kubernetes.io/projected/8e812dd9-cd05-4e9e-8710-d0920181ece2-kube-api-access-s54f9\") pod \"csi-snapshot-controller-operator-5f5d689c6b-mqbmq\" (UID: \"8e812dd9-cd05-4e9e-8710-d0920181ece2\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-mqbmq"
Mar 18 09:54:22.174657 master-0 kubenswrapper[3991]: I0318 09:54:22.172699 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6"
Mar 18 09:54:22.174657 master-0 kubenswrapper[3991]: I0318 09:54:22.172734 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-hkzr2\" (UID: \"ca4a0040-a638-46fa-a1cb-a19d83a7ebe4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2"
Mar 18 09:54:22.175338 master-0 kubenswrapper[3991]: I0318 09:54:22.172765 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec53d7fa-445b-4e1d-84ef-545f08e80ccc-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-lk698\" (UID: \"ec53d7fa-445b-4e1d-84ef-545f08e80ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698"
Mar 18 09:54:22.175338 master-0 kubenswrapper[3991]: I0318 09:54:22.172811 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlxfz\" (UniqueName: \"kubernetes.io/projected/bb35841e-d992-4044-aaaa-06c9faf47bd0-kube-api-access-zlxfz\") pod \"service-ca-operator-b865698dc-pgtbr\" (UID: \"bb35841e-d992-4044-aaaa-06c9faf47bd0\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr"
Mar 18 09:54:22.175338 master-0 kubenswrapper[3991]: I0318 09:54:22.172870 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f076eaf0-b041-4db0-ba06-3d85e23bb654-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-4q9tr\" (UID: \"f076eaf0-b041-4db0-ba06-3d85e23bb654\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr"
Mar 18 09:54:22.175338 master-0 kubenswrapper[3991]: I0318 09:54:22.172913 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fjk8\" (UniqueName: \"kubernetes.io/projected/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480-kube-api-access-9fjk8\") pod \"openshift-config-operator-95bf4f4d-495pg\" (UID: \"0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg"
Mar 18 09:54:22.175338 master-0 kubenswrapper[3991]: I0318 09:54:22.172971 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvnrf\" (UniqueName: \"kubernetes.io/projected/62b82d72-d73c-451a-84e1-551d73036aa8-kube-api-access-lvnrf\") pod \"iptables-alerter-r7h65\" (UID: \"62b82d72-d73c-451a-84e1-551d73036aa8\") " pod="openshift-network-operator/iptables-alerter-r7h65"
Mar 18 09:54:22.175338 master-0 kubenswrapper[3991]: I0318 09:54:22.173020 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480-serving-cert\") pod \"openshift-config-operator-95bf4f4d-495pg\" (UID: \"0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg"
Mar 18 09:54:22.175338 master-0 kubenswrapper[3991]: I0318 09:54:22.173049 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls\") pod \"dns-operator-9c5679d8f-jrmkr\" (UID: \"8cb5158f-2199-42c0-995a-8490c9ec8a95\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-jrmkr"
Mar 18 09:54:22.175338 master-0 kubenswrapper[3991]: I0318 09:54:22.173116 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-8kx9m\" (UID: \"f69a00b6-d908-4485-bb0d-57594fc01d24\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m"
Mar 18 09:54:22.175338 master-0 kubenswrapper[3991]: I0318 09:54:22.173160 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f076eaf0-b041-4db0-ba06-3d85e23bb654-config\") pod \"authentication-operator-5885bfd7f4-4q9tr\" (UID: \"f076eaf0-b041-4db0-ba06-3d85e23bb654\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr"
Mar 18 09:54:22.175338 master-0 kubenswrapper[3991]: E0318 09:54:22.173276 3991 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 18 09:54:22.175338 master-0 kubenswrapper[3991]: E0318 09:54:22.173341 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls podName:f69a00b6-d908-4485-bb0d-57594fc01d24 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:22.67331777 +0000 UTC m=+146.632257675 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-8kx9m" (UID: "f69a00b6-d908-4485-bb0d-57594fc01d24") : secret "cluster-monitoring-operator-tls" not found
Mar 18 09:54:22.175338 master-0 kubenswrapper[3991]: I0318 09:54:22.173372 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb35841e-d992-4044-aaaa-06c9faf47bd0-config\") pod \"service-ca-operator-b865698dc-pgtbr\" (UID: \"bb35841e-d992-4044-aaaa-06c9faf47bd0\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr"
Mar 18 09:54:22.175338 master-0 kubenswrapper[3991]: I0318 09:54:22.174439 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d72e695-0183-4ee8-8add-5425e67f7138-config\") pod \"openshift-apiserver-operator-d65958b8-zz68c\" (UID: \"0d72e695-0183-4ee8-8add-5425e67f7138\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c"
Mar 18 09:54:22.175338 master-0 kubenswrapper[3991]: I0318 09:54:22.174507 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec53d7fa-445b-4e1d-84ef-545f08e80ccc-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-lk698\" (UID: \"ec53d7fa-445b-4e1d-84ef-545f08e80ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698"
Mar 18 09:54:22.175892 master-0 kubenswrapper[3991]: I0318 09:54:22.175763 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f076eaf0-b041-4db0-ba06-3d85e23bb654-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-4q9tr\" (UID: \"f076eaf0-b041-4db0-ba06-3d85e23bb654\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr"
Mar 18 09:54:22.177191 master-0 kubenswrapper[3991]: I0318 09:54:22.176250 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d72e695-0183-4ee8-8add-5425e67f7138-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-zz68c\" (UID: \"0d72e695-0183-4ee8-8add-5425e67f7138\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c"
Mar 18 09:54:22.177191 master-0 kubenswrapper[3991]: I0318 09:54:22.176381 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec53d7fa-445b-4e1d-84ef-545f08e80ccc-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-lk698\" (UID: \"ec53d7fa-445b-4e1d-84ef-545f08e80ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698"
Mar 18 09:54:22.177191 master-0 kubenswrapper[3991]: E0318 09:54:22.176449 3991 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 18 09:54:22.177191 master-0 kubenswrapper[3991]: E0318 09:54:22.176452 3991 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 18 09:54:22.177191 master-0 kubenswrapper[3991]: E0318 09:54:22.176484 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs podName:ca4a0040-a638-46fa-a1cb-a19d83a7ebe4 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:22.676473336 +0000 UTC m=+146.635413231 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-hkzr2" (UID: "ca4a0040-a638-46fa-a1cb-a19d83a7ebe4") : secret "multus-admission-controller-secret" not found
Mar 18 09:54:22.177191 master-0 kubenswrapper[3991]: I0318 09:54:22.176489 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f076eaf0-b041-4db0-ba06-3d85e23bb654-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-4q9tr\" (UID: \"f076eaf0-b041-4db0-ba06-3d85e23bb654\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr"
Mar 18 09:54:22.177191 master-0 kubenswrapper[3991]: E0318 09:54:22.176518 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert podName:c2635254-a491-42e5-b598-461c24bf77ca nodeName:}" failed. No retries permitted until 2026-03-18 09:54:22.676492147 +0000 UTC m=+146.635432052 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-s7rm6" (UID: "c2635254-a491-42e5-b598-461c24bf77ca") : secret "performance-addon-operator-webhook-cert" not found
Mar 18 09:54:22.177191 master-0 kubenswrapper[3991]: E0318 09:54:22.176538 3991 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 18 09:54:22.177191 master-0 kubenswrapper[3991]: E0318 09:54:22.176675 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls podName:8cb5158f-2199-42c0-995a-8490c9ec8a95 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:22.676652081 +0000 UTC m=+146.635592016 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls") pod "dns-operator-9c5679d8f-jrmkr" (UID: "8cb5158f-2199-42c0-995a-8490c9ec8a95") : secret "metrics-tls" not found
Mar 18 09:54:22.177191 master-0 kubenswrapper[3991]: I0318 09:54:22.177150 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/62b82d72-d73c-451a-84e1-551d73036aa8-iptables-alerter-script\") pod \"iptables-alerter-r7h65\" (UID: \"62b82d72-d73c-451a-84e1-551d73036aa8\") " pod="openshift-network-operator/iptables-alerter-r7h65"
Mar 18 09:54:22.177914 master-0 kubenswrapper[3991]: I0318 09:54:22.177855 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb35841e-d992-4044-aaaa-06c9faf47bd0-serving-cert\") pod \"service-ca-operator-b865698dc-pgtbr\" (UID: \"bb35841e-d992-4044-aaaa-06c9faf47bd0\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr"
Mar 18 09:54:22.180372 master-0 kubenswrapper[3991]: I0318 09:54:22.180308 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f076eaf0-b041-4db0-ba06-3d85e23bb654-serving-cert\") pod \"authentication-operator-5885bfd7f4-4q9tr\" (UID: \"f076eaf0-b041-4db0-ba06-3d85e23bb654\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr"
Mar 18 09:54:22.181432 master-0 kubenswrapper[3991]: I0318 09:54:22.181387 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480-serving-cert\") pod \"openshift-config-operator-95bf4f4d-495pg\" (UID: \"0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg"
Mar 18 09:54:22.340223 master-0 kubenswrapper[3991]: I0318 09:54:22.340170 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb7tz\" (UniqueName: \"kubernetes.io/projected/8ee99294-4785-49d0-b493-0d734cf09396-kube-api-access-tb7tz\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m"
Mar 18 09:54:22.341481 master-0 kubenswrapper[3991]: I0318 09:54:22.341366 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25k9g\" (UniqueName: \"kubernetes.io/projected/ee376320-9ca0-444d-ab37-9cbcb6729b11-kube-api-access-25k9g\") pod \"catalog-operator-68f85b4d6c-fhz5s\" (UID: \"ee376320-9ca0-444d-ab37-9cbcb6729b11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s"
Mar 18 09:54:22.343705 master-0 kubenswrapper[3991]: I0318 09:54:22.343628 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-vj8tt\" (UID: \"3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt"
Mar 18 09:54:22.344029 master-0 kubenswrapper[3991]: I0318 09:54:22.343983 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/accc57fb-75f5-4f89-9804-6ede7f77e27c-bound-sa-token\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz"
Mar 18 09:54:22.344887 master-0 kubenswrapper[3991]: I0318 09:54:22.344233 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxv6v\" (UniqueName: \"kubernetes.io/projected/a078565a-6970-4f42-84f4-938f1d637245-kube-api-access-cxv6v\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm"
Mar 18 09:54:22.344887 master-0 kubenswrapper[3991]: I0318 09:54:22.344644 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlxfz\" (UniqueName: \"kubernetes.io/projected/bb35841e-d992-4044-aaaa-06c9faf47bd0-kube-api-access-zlxfz\") pod \"service-ca-operator-b865698dc-pgtbr\" (UID: \"bb35841e-d992-4044-aaaa-06c9faf47bd0\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr"
Mar 18 09:54:22.344887 master-0 kubenswrapper[3991]: I0318 09:54:22.344751 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr"
Mar 18 09:54:22.358859 master-0 kubenswrapper[3991]: I0318 09:54:22.345375 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0999f781-3299-4cb6-ba76-2a4f4584c685-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-pzqqc\" (UID: \"0999f781-3299-4cb6-ba76-2a4f4584c685\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc"
Mar 18 09:54:22.361872 master-0 kubenswrapper[3991]: I0318 09:54:22.360503 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6a6a616d-012a-479e-ab3d-b21295ea1805-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-smghb\" (UID: \"6a6a616d-012a-479e-ab3d-b21295ea1805\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb"
Mar 18 09:54:22.365625 master-0 kubenswrapper[3991]: I0318 09:54:22.365585 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4qp9\" (UniqueName: \"kubernetes.io/projected/d4d2218c-f9df-4d43-8727-ed3a920e23f7-kube-api-access-w4qp9\") pod \"package-server-manager-7b95f86987-r8fkv\" (UID: \"d4d2218c-f9df-4d43-8727-ed3a920e23f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv"
Mar 18 09:54:22.365848 master-0 kubenswrapper[3991]: I0318 09:54:22.365790 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkzq9\" (UniqueName: \"kubernetes.io/projected/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-kube-api-access-dkzq9\") pod \"multus-admission-controller-5dbbb8b86f-hkzr2\" (UID: \"ca4a0040-a638-46fa-a1cb-a19d83a7ebe4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2"
Mar 18 09:54:22.369722 master-0 kubenswrapper[3991]: I0318 09:54:22.368498 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2chb\" (UniqueName: \"kubernetes.io/projected/8cb5158f-2199-42c0-995a-8490c9ec8a95-kube-api-access-p2chb\") pod \"dns-operator-9c5679d8f-jrmkr\" (UID: \"8cb5158f-2199-42c0-995a-8490c9ec8a95\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-jrmkr"
Mar 18 09:54:22.369722 master-0 kubenswrapper[3991]: I0318 09:54:22.369530 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6bvr\" (UniqueName: \"kubernetes.io/projected/0d72e695-0183-4ee8-8add-5425e67f7138-kube-api-access-g6bvr\") pod \"openshift-apiserver-operator-d65958b8-zz68c\" (UID: \"0d72e695-0183-4ee8-8add-5425e67f7138\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c"
Mar 18 09:54:22.372407 master-0 kubenswrapper[3991]: I0318 09:54:22.372351 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx9p2\" (UniqueName: \"kubernetes.io/projected/db52ca42-e458-407f-9eeb-bf6de6405edc-kube-api-access-jx9p2\") pod \"olm-operator-5c9796789-hc74k\" (UID: \"db52ca42-e458-407f-9eeb-bf6de6405edc\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k"
Mar 18 09:54:22.373354 master-0 kubenswrapper[3991]: I0318 09:54:22.373314 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shbrj\" (UniqueName: \"kubernetes.io/projected/6f266bad-8b30-4300-ad93-9d48e61f2440-kube-api-access-shbrj\") pod \"marketplace-operator-89ccd998f-2glpv\" (UID: \"6f266bad-8b30-4300-ad93-9d48e61f2440\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv"
Mar 18 09:54:22.373578 master-0 kubenswrapper[3991]: I0318 09:54:22.373544 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s54f9\" (UniqueName: \"kubernetes.io/projected/8e812dd9-cd05-4e9e-8710-d0920181ece2-kube-api-access-s54f9\") pod \"csi-snapshot-controller-operator-5f5d689c6b-mqbmq\" (UID: \"8e812dd9-cd05-4e9e-8710-d0920181ece2\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-mqbmq"
Mar 18 09:54:22.373814 master-0 kubenswrapper[3991]: I0318 09:54:22.373780 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwfph\" (UniqueName: \"kubernetes.io/projected/accc57fb-75f5-4f89-9804-6ede7f77e27c-kube-api-access-nwfph\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz"
Mar 18 09:54:22.373971 master-0 kubenswrapper[3991]: I0318 09:54:22.373915 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhzg4\" (UniqueName: \"kubernetes.io/projected/d26036f1-bdce-4ec5-873f-962fa7e8e6c1-kube-api-access-lhzg4\") pod \"cluster-olm-operator-67dcd4998-mrc8q\" (UID: \"d26036f1-bdce-4ec5-873f-962fa7e8e6c1\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q"
Mar 18 09:54:22.374043 master-0 kubenswrapper[3991]: I0318 09:54:22.374014 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5dk8\" (UniqueName: \"kubernetes.io/projected/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4-kube-api-access-p5dk8\") pod \"openshift-controller-manager-operator-8c94f4649-g25jq\" (UID: \"3646e0cd-49c9-4a98-a2e3-efe9359cc6c4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq"
Mar 18 09:54:22.374954 master-0 kubenswrapper[3991]: I0318 09:54:22.374879 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fjk8\" (UniqueName: \"kubernetes.io/projected/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480-kube-api-access-9fjk8\") pod \"openshift-config-operator-95bf4f4d-495pg\" (UID: \"0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg"
Mar 18 09:54:22.375971 master-0 kubenswrapper[3991]: I0318 09:54:22.375945 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvnrf\" (UniqueName: \"kubernetes.io/projected/62b82d72-d73c-451a-84e1-551d73036aa8-kube-api-access-lvnrf\") pod \"iptables-alerter-r7h65\" (UID: \"62b82d72-d73c-451a-84e1-551d73036aa8\") " pod="openshift-network-operator/iptables-alerter-r7h65"
Mar 18 09:54:22.378021 master-0 kubenswrapper[3991]: I0318 09:54:22.377959 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r7qd\" (UniqueName: \"kubernetes.io/projected/f69a00b6-d908-4485-bb0d-57594fc01d24-kube-api-access-5r7qd\") pod \"cluster-monitoring-operator-58845fbb57-8kx9m\" (UID: \"f69a00b6-d908-4485-bb0d-57594fc01d24\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m"
Mar 18 09:54:22.378096 master-0 kubenswrapper[3991]: I0318 09:54:22.377988 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4hfd\" (UniqueName: \"kubernetes.io/projected/c2635254-a491-42e5-b598-461c24bf77ca-kube-api-access-p4hfd\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6"
Mar 18 09:54:22.384349 master-0 kubenswrapper[3991]: I0318 09:54:22.379102 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8ee99294-4785-49d0-b493-0d734cf09396-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m"
Mar 18 09:54:22.384349 master-0 kubenswrapper[3991]: I0318 09:54:22.381997 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj9sq\" (UniqueName: \"kubernetes.io/projected/ec53d7fa-445b-4e1d-84ef-545f08e80ccc-kube-api-access-wj9sq\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-lk698\" (UID: \"ec53d7fa-445b-4e1d-84ef-545f08e80ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698"
Mar 18 09:54:22.388335 master-0 kubenswrapper[3991]: I0318 09:54:22.388286 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f25pg\" (UniqueName: \"kubernetes.io/projected/f076eaf0-b041-4db0-ba06-3d85e23bb654-kube-api-access-f25pg\") pod \"authentication-operator-5885bfd7f4-4q9tr\" (UID: \"f076eaf0-b041-4db0-ba06-3d85e23bb654\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr"
Mar 18 09:54:22.411091 master-0 kubenswrapper[3991]: I0318 09:54:22.411026 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c"
Mar 18 09:54:22.429808 master-0 kubenswrapper[3991]: I0318 09:54:22.429769 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr"
Mar 18 09:54:22.439921 master-0 kubenswrapper[3991]: I0318 09:54:22.439023 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-mqbmq"
Mar 18 09:54:22.441931 master-0 kubenswrapper[3991]: I0318 09:54:22.441892 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q"
Mar 18 09:54:22.458879 master-0 kubenswrapper[3991]: I0318 09:54:22.458846 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-r7h65"
Mar 18 09:54:22.465110 master-0 kubenswrapper[3991]: I0318 09:54:22.465055 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt"
Mar 18 09:54:22.494756 master-0 kubenswrapper[3991]: I0318 09:54:22.494374 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm"
Mar 18 09:54:22.580170 master-0 kubenswrapper[3991]: I0318 09:54:22.579691 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq" Mar 18 09:54:22.580544 master-0 kubenswrapper[3991]: I0318 09:54:22.580512 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m" Mar 18 09:54:22.580613 master-0 kubenswrapper[3991]: I0318 09:54:22.580560 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" Mar 18 09:54:22.580677 master-0 kubenswrapper[3991]: E0318 09:54:22.580656 3991 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 09:54:22.580725 master-0 kubenswrapper[3991]: E0318 09:54:22.580712 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls podName:8ee99294-4785-49d0-b493-0d734cf09396 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:23.58069119 +0000 UTC m=+147.539631085 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-f4f7m" (UID: "8ee99294-4785-49d0-b493-0d734cf09396") : secret "image-registry-operator-tls" not found Mar 18 09:54:22.580796 master-0 kubenswrapper[3991]: I0318 09:54:22.580765 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-2glpv\" (UID: \"6f266bad-8b30-4300-ad93-9d48e61f2440\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" Mar 18 09:54:22.580796 master-0 kubenswrapper[3991]: I0318 09:54:22.580783 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert\") pod \"catalog-operator-68f85b4d6c-fhz5s\" (UID: \"ee376320-9ca0-444d-ab37-9cbcb6729b11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s" Mar 18 09:54:22.580905 master-0 kubenswrapper[3991]: I0318 09:54:22.580801 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-r8fkv\" (UID: \"d4d2218c-f9df-4d43-8727-ed3a920e23f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv" Mar 18 09:54:22.580905 master-0 kubenswrapper[3991]: I0318 09:54:22.580852 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert\") pod \"olm-operator-5c9796789-hc74k\" (UID: 
\"db52ca42-e458-407f-9eeb-bf6de6405edc\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k" Mar 18 09:54:22.580905 master-0 kubenswrapper[3991]: E0318 09:54:22.580796 3991 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 09:54:22.580905 master-0 kubenswrapper[3991]: E0318 09:54:22.580910 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls podName:accc57fb-75f5-4f89-9804-6ede7f77e27c nodeName:}" failed. No retries permitted until 2026-03-18 09:54:23.580901895 +0000 UTC m=+147.539841790 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls") pod "ingress-operator-66b84d69b-kr5kz" (UID: "accc57fb-75f5-4f89-9804-6ede7f77e27c") : secret "metrics-tls" not found Mar 18 09:54:22.581570 master-0 kubenswrapper[3991]: E0318 09:54:22.580914 3991 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 09:54:22.581570 master-0 kubenswrapper[3991]: E0318 09:54:22.580958 3991 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 09:54:22.581570 master-0 kubenswrapper[3991]: E0318 09:54:22.580975 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert podName:d4d2218c-f9df-4d43-8727-ed3a920e23f7 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:23.580958047 +0000 UTC m=+147.539897942 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-r8fkv" (UID: "d4d2218c-f9df-4d43-8727-ed3a920e23f7") : secret "package-server-manager-serving-cert" not found Mar 18 09:54:22.581570 master-0 kubenswrapper[3991]: E0318 09:54:22.580864 3991 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 09:54:22.581570 master-0 kubenswrapper[3991]: E0318 09:54:22.580991 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert podName:ee376320-9ca0-444d-ab37-9cbcb6729b11 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:23.580983677 +0000 UTC m=+147.539923572 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert") pod "catalog-operator-68f85b4d6c-fhz5s" (UID: "ee376320-9ca0-444d-ab37-9cbcb6729b11") : secret "catalog-operator-serving-cert" not found Mar 18 09:54:22.581570 master-0 kubenswrapper[3991]: E0318 09:54:22.581001 3991 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 09:54:22.581570 master-0 kubenswrapper[3991]: E0318 09:54:22.581027 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics podName:6f266bad-8b30-4300-ad93-9d48e61f2440 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:23.581011748 +0000 UTC m=+147.539951723 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-2glpv" (UID: "6f266bad-8b30-4300-ad93-9d48e61f2440") : secret "marketplace-operator-metrics" not found Mar 18 09:54:22.581570 master-0 kubenswrapper[3991]: E0318 09:54:22.581045 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert podName:db52ca42-e458-407f-9eeb-bf6de6405edc nodeName:}" failed. No retries permitted until 2026-03-18 09:54:23.581036388 +0000 UTC m=+147.539976393 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert") pod "olm-operator-5c9796789-hc74k" (UID: "db52ca42-e458-407f-9eeb-bf6de6405edc") : secret "olm-operator-serving-cert" not found Mar 18 09:54:22.584573 master-0 kubenswrapper[3991]: I0318 09:54:22.582782 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb" Mar 18 09:54:22.600884 master-0 kubenswrapper[3991]: I0318 09:54:22.600769 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc" Mar 18 09:54:22.620117 master-0 kubenswrapper[3991]: I0318 09:54:22.618100 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698" Mar 18 09:54:22.643983 master-0 kubenswrapper[3991]: I0318 09:54:22.638071 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" Mar 18 09:54:22.682578 master-0 kubenswrapper[3991]: I0318 09:54:22.682532 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" Mar 18 09:54:22.682578 master-0 kubenswrapper[3991]: I0318 09:54:22.682581 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-hkzr2\" (UID: \"ca4a0040-a638-46fa-a1cb-a19d83a7ebe4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2" Mar 18 09:54:22.682813 master-0 kubenswrapper[3991]: I0318 09:54:22.682618 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls\") pod \"dns-operator-9c5679d8f-jrmkr\" (UID: \"8cb5158f-2199-42c0-995a-8490c9ec8a95\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-jrmkr" Mar 18 09:54:22.682813 master-0 kubenswrapper[3991]: I0318 09:54:22.682656 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-8kx9m\" (UID: \"f69a00b6-d908-4485-bb0d-57594fc01d24\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m" Mar 18 09:54:22.682813 master-0 kubenswrapper[3991]: I0318 09:54:22.682710 3991 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" Mar 18 09:54:22.683035 master-0 kubenswrapper[3991]: E0318 09:54:22.682916 3991 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 09:54:22.683035 master-0 kubenswrapper[3991]: E0318 09:54:22.682980 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls podName:c2635254-a491-42e5-b598-461c24bf77ca nodeName:}" failed. No retries permitted until 2026-03-18 09:54:23.682957671 +0000 UTC m=+147.641897566 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-s7rm6" (UID: "c2635254-a491-42e5-b598-461c24bf77ca") : secret "node-tuning-operator-tls" not found Mar 18 09:54:22.683122 master-0 kubenswrapper[3991]: E0318 09:54:22.683037 3991 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 09:54:22.683122 master-0 kubenswrapper[3991]: E0318 09:54:22.683064 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert podName:c2635254-a491-42e5-b598-461c24bf77ca nodeName:}" failed. No retries permitted until 2026-03-18 09:54:23.683056033 +0000 UTC m=+147.641995928 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-s7rm6" (UID: "c2635254-a491-42e5-b598-461c24bf77ca") : secret "performance-addon-operator-webhook-cert" not found Mar 18 09:54:22.683122 master-0 kubenswrapper[3991]: E0318 09:54:22.683104 3991 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 09:54:22.683122 master-0 kubenswrapper[3991]: E0318 09:54:22.683121 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs podName:ca4a0040-a638-46fa-a1cb-a19d83a7ebe4 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:23.683115864 +0000 UTC m=+147.642055759 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-hkzr2" (UID: "ca4a0040-a638-46fa-a1cb-a19d83a7ebe4") : secret "multus-admission-controller-secret" not found Mar 18 09:54:22.683276 master-0 kubenswrapper[3991]: E0318 09:54:22.683154 3991 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 09:54:22.683276 master-0 kubenswrapper[3991]: E0318 09:54:22.683172 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls podName:8cb5158f-2199-42c0-995a-8490c9ec8a95 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:23.683166666 +0000 UTC m=+147.642106561 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls") pod "dns-operator-9c5679d8f-jrmkr" (UID: "8cb5158f-2199-42c0-995a-8490c9ec8a95") : secret "metrics-tls" not found Mar 18 09:54:22.683276 master-0 kubenswrapper[3991]: E0318 09:54:22.683204 3991 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 09:54:22.683276 master-0 kubenswrapper[3991]: E0318 09:54:22.683220 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls podName:f69a00b6-d908-4485-bb0d-57594fc01d24 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:23.683215447 +0000 UTC m=+147.642155342 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-8kx9m" (UID: "f69a00b6-d908-4485-bb0d-57594fc01d24") : secret "cluster-monitoring-operator-tls" not found Mar 18 09:54:22.796021 master-0 kubenswrapper[3991]: I0318 09:54:22.795976 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-r7h65" event={"ID":"62b82d72-d73c-451a-84e1-551d73036aa8","Type":"ContainerStarted","Data":"dfd0e7e42052e04911701599adae500aa7e091be93bca4bd99512045dd966402"} Mar 18 09:54:22.876711 master-0 kubenswrapper[3991]: I0318 09:54:22.876616 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c"] Mar 18 09:54:22.877745 master-0 kubenswrapper[3991]: I0318 09:54:22.877366 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr"] Mar 18 09:54:23.205433 master-0 
kubenswrapper[3991]: I0318 09:54:23.205168 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-mqbmq"] Mar 18 09:54:23.205433 master-0 kubenswrapper[3991]: I0318 09:54:23.205307 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq"] Mar 18 09:54:23.223037 master-0 kubenswrapper[3991]: W0318 09:54:23.221703 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e812dd9_cd05_4e9e_8710_d0920181ece2.slice/crio-22baed4d026a2e73b0585b205810980acb867f841d04f3d6a690f1122607e415 WatchSource:0}: Error finding container 22baed4d026a2e73b0585b205810980acb867f841d04f3d6a690f1122607e415: Status 404 returned error can't find the container with id 22baed4d026a2e73b0585b205810980acb867f841d04f3d6a690f1122607e415 Mar 18 09:54:23.223473 master-0 kubenswrapper[3991]: I0318 09:54:23.223323 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt"] Mar 18 09:54:23.224205 master-0 kubenswrapper[3991]: W0318 09:54:23.224120 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3414fa1f_e4ee_4c7e_81cd_1fbd86486cd6.slice/crio-04e5c67b5ae79340d56b2dfa469c98052472e299f29df84c3890635b1574d4c0 WatchSource:0}: Error finding container 04e5c67b5ae79340d56b2dfa469c98052472e299f29df84c3890635b1574d4c0: Status 404 returned error can't find the container with id 04e5c67b5ae79340d56b2dfa469c98052472e299f29df84c3890635b1574d4c0 Mar 18 09:54:23.227869 master-0 kubenswrapper[3991]: I0318 09:54:23.227693 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc"] Mar 18 09:54:23.231407 
master-0 kubenswrapper[3991]: I0318 09:54:23.231334 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm"] Mar 18 09:54:23.466037 master-0 kubenswrapper[3991]: I0318 09:54:23.463751 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr"] Mar 18 09:54:23.466037 master-0 kubenswrapper[3991]: I0318 09:54:23.463866 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb"] Mar 18 09:54:23.466564 master-0 kubenswrapper[3991]: I0318 09:54:23.466084 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698"] Mar 18 09:54:23.467524 master-0 kubenswrapper[3991]: I0318 09:54:23.467313 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-95bf4f4d-495pg"] Mar 18 09:54:23.470929 master-0 kubenswrapper[3991]: I0318 09:54:23.470875 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q"] Mar 18 09:54:23.598412 master-0 kubenswrapper[3991]: I0318 09:54:23.598328 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-2glpv\" (UID: \"6f266bad-8b30-4300-ad93-9d48e61f2440\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" Mar 18 09:54:23.598412 master-0 kubenswrapper[3991]: I0318 09:54:23.598398 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert\") pod \"catalog-operator-68f85b4d6c-fhz5s\" (UID: 
\"ee376320-9ca0-444d-ab37-9cbcb6729b11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s" Mar 18 09:54:23.598697 master-0 kubenswrapper[3991]: E0318 09:54:23.598556 3991 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 09:54:23.598697 master-0 kubenswrapper[3991]: I0318 09:54:23.598579 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-r8fkv\" (UID: \"d4d2218c-f9df-4d43-8727-ed3a920e23f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv" Mar 18 09:54:23.598697 master-0 kubenswrapper[3991]: E0318 09:54:23.598641 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics podName:6f266bad-8b30-4300-ad93-9d48e61f2440 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:25.598613748 +0000 UTC m=+149.557553683 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-2glpv" (UID: "6f266bad-8b30-4300-ad93-9d48e61f2440") : secret "marketplace-operator-metrics" not found Mar 18 09:54:23.598935 master-0 kubenswrapper[3991]: I0318 09:54:23.598703 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert\") pod \"olm-operator-5c9796789-hc74k\" (UID: \"db52ca42-e458-407f-9eeb-bf6de6405edc\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k" Mar 18 09:54:23.598935 master-0 kubenswrapper[3991]: I0318 09:54:23.598756 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m" Mar 18 09:54:23.598935 master-0 kubenswrapper[3991]: E0318 09:54:23.598711 3991 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 09:54:23.598935 master-0 kubenswrapper[3991]: E0318 09:54:23.598882 3991 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 09:54:23.598935 master-0 kubenswrapper[3991]: E0318 09:54:23.598768 3991 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 09:54:23.598935 master-0 kubenswrapper[3991]: E0318 09:54:23.598888 3991 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert podName:d4d2218c-f9df-4d43-8727-ed3a920e23f7 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:25.598870774 +0000 UTC m=+149.557810709 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-r8fkv" (UID: "d4d2218c-f9df-4d43-8727-ed3a920e23f7") : secret "package-server-manager-serving-cert" not found Mar 18 09:54:23.599319 master-0 kubenswrapper[3991]: E0318 09:54:23.598942 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls podName:8ee99294-4785-49d0-b493-0d734cf09396 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:25.598929725 +0000 UTC m=+149.557869660 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-f4f7m" (UID: "8ee99294-4785-49d0-b493-0d734cf09396") : secret "image-registry-operator-tls" not found Mar 18 09:54:23.599319 master-0 kubenswrapper[3991]: E0318 09:54:23.598990 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert podName:ee376320-9ca0-444d-ab37-9cbcb6729b11 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:25.598977607 +0000 UTC m=+149.557917542 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert") pod "catalog-operator-68f85b4d6c-fhz5s" (UID: "ee376320-9ca0-444d-ab37-9cbcb6729b11") : secret "catalog-operator-serving-cert" not found Mar 18 09:54:23.599319 master-0 kubenswrapper[3991]: I0318 09:54:23.599034 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" Mar 18 09:54:23.599319 master-0 kubenswrapper[3991]: E0318 09:54:23.599197 3991 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 09:54:23.599319 master-0 kubenswrapper[3991]: E0318 09:54:23.599233 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls podName:accc57fb-75f5-4f89-9804-6ede7f77e27c nodeName:}" failed. No retries permitted until 2026-03-18 09:54:25.599221462 +0000 UTC m=+149.558161397 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls") pod "ingress-operator-66b84d69b-kr5kz" (UID: "accc57fb-75f5-4f89-9804-6ede7f77e27c") : secret "metrics-tls" not found Mar 18 09:54:23.599319 master-0 kubenswrapper[3991]: E0318 09:54:23.598950 3991 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 09:54:23.599319 master-0 kubenswrapper[3991]: E0318 09:54:23.599274 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert podName:db52ca42-e458-407f-9eeb-bf6de6405edc nodeName:}" failed. No retries permitted until 2026-03-18 09:54:25.599263823 +0000 UTC m=+149.558203748 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert") pod "olm-operator-5c9796789-hc74k" (UID: "db52ca42-e458-407f-9eeb-bf6de6405edc") : secret "olm-operator-serving-cert" not found Mar 18 09:54:23.700115 master-0 kubenswrapper[3991]: I0318 09:54:23.699842 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" Mar 18 09:54:23.700115 master-0 kubenswrapper[3991]: I0318 09:54:23.699911 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-hkzr2\" (UID: \"ca4a0040-a638-46fa-a1cb-a19d83a7ebe4\") " 
pod="openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2" Mar 18 09:54:23.700115 master-0 kubenswrapper[3991]: I0318 09:54:23.699976 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls\") pod \"dns-operator-9c5679d8f-jrmkr\" (UID: \"8cb5158f-2199-42c0-995a-8490c9ec8a95\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-jrmkr" Mar 18 09:54:23.700115 master-0 kubenswrapper[3991]: E0318 09:54:23.700107 3991 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 09:54:23.700471 master-0 kubenswrapper[3991]: E0318 09:54:23.700166 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls podName:8cb5158f-2199-42c0-995a-8490c9ec8a95 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:25.70014499 +0000 UTC m=+149.659084905 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls") pod "dns-operator-9c5679d8f-jrmkr" (UID: "8cb5158f-2199-42c0-995a-8490c9ec8a95") : secret "metrics-tls" not found Mar 18 09:54:23.700471 master-0 kubenswrapper[3991]: I0318 09:54:23.700173 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-8kx9m\" (UID: \"f69a00b6-d908-4485-bb0d-57594fc01d24\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m" Mar 18 09:54:23.700471 master-0 kubenswrapper[3991]: E0318 09:54:23.700225 3991 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 09:54:23.700471 master-0 kubenswrapper[3991]: E0318 09:54:23.700309 3991 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 09:54:23.700471 master-0 kubenswrapper[3991]: E0318 09:54:23.700322 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs podName:ca4a0040-a638-46fa-a1cb-a19d83a7ebe4 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:25.700292494 +0000 UTC m=+149.659232429 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-hkzr2" (UID: "ca4a0040-a638-46fa-a1cb-a19d83a7ebe4") : secret "multus-admission-controller-secret" not found Mar 18 09:54:23.700471 master-0 kubenswrapper[3991]: E0318 09:54:23.700319 3991 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 09:54:23.700471 master-0 kubenswrapper[3991]: E0318 09:54:23.700346 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls podName:f69a00b6-d908-4485-bb0d-57594fc01d24 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:25.700335345 +0000 UTC m=+149.659275260 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-8kx9m" (UID: "f69a00b6-d908-4485-bb0d-57594fc01d24") : secret "cluster-monitoring-operator-tls" not found Mar 18 09:54:23.700974 master-0 kubenswrapper[3991]: I0318 09:54:23.700510 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" Mar 18 09:54:23.700974 master-0 kubenswrapper[3991]: E0318 09:54:23.700592 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert 
podName:c2635254-a491-42e5-b598-461c24bf77ca nodeName:}" failed. No retries permitted until 2026-03-18 09:54:25.7005681 +0000 UTC m=+149.659508025 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-s7rm6" (UID: "c2635254-a491-42e5-b598-461c24bf77ca") : secret "performance-addon-operator-webhook-cert" not found Mar 18 09:54:23.700974 master-0 kubenswrapper[3991]: E0318 09:54:23.700640 3991 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 09:54:23.700974 master-0 kubenswrapper[3991]: E0318 09:54:23.700741 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls podName:c2635254-a491-42e5-b598-461c24bf77ca nodeName:}" failed. No retries permitted until 2026-03-18 09:54:25.700712434 +0000 UTC m=+149.659652369 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-s7rm6" (UID: "c2635254-a491-42e5-b598-461c24bf77ca") : secret "node-tuning-operator-tls" not found Mar 18 09:54:23.802966 master-0 kubenswrapper[3991]: I0318 09:54:23.802885 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q" event={"ID":"d26036f1-bdce-4ec5-873f-962fa7e8e6c1","Type":"ContainerStarted","Data":"8f11956d88039b0b64ae7a326d73a1a29f38de2a62777ca3d744161f04878819"} Mar 18 09:54:23.804784 master-0 kubenswrapper[3991]: I0318 09:54:23.804723 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb" event={"ID":"6a6a616d-012a-479e-ab3d-b21295ea1805","Type":"ContainerStarted","Data":"543fb2147aca575376ed7bd211cfca3f8a0e31f62df5e58bf47f4f7fc11fc303"} Mar 18 09:54:23.806037 master-0 kubenswrapper[3991]: I0318 09:54:23.805941 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr" event={"ID":"f076eaf0-b041-4db0-ba06-3d85e23bb654","Type":"ContainerStarted","Data":"02d02240944e9230fa342b4b1030eceabc9b6ad789e1383eef1d657905cf15af"} Mar 18 09:54:23.807411 master-0 kubenswrapper[3991]: I0318 09:54:23.807363 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" event={"ID":"0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480","Type":"ContainerStarted","Data":"2cf1bdb8eb09b95692725959e60306272582dc358e1d2a541fe6b5b5e57971c0"} Mar 18 09:54:23.808613 master-0 kubenswrapper[3991]: I0318 09:54:23.808567 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc" 
event={"ID":"0999f781-3299-4cb6-ba76-2a4f4584c685","Type":"ContainerStarted","Data":"0d84a97391b20bbc1473efdc91b70735c4232a35d2754651bb0243ebf80ab3be"} Mar 18 09:54:23.810070 master-0 kubenswrapper[3991]: I0318 09:54:23.810026 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" event={"ID":"a078565a-6970-4f42-84f4-938f1d637245","Type":"ContainerStarted","Data":"613533c3a19224e9e30dba35639ecd39810b8db2f7864917803baa176a7bbed0"} Mar 18 09:54:23.811405 master-0 kubenswrapper[3991]: I0318 09:54:23.811364 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr" event={"ID":"bb35841e-d992-4044-aaaa-06c9faf47bd0","Type":"ContainerStarted","Data":"3fdec4aed0d4d1e92fcea54e18530bddc4ceb0a577b38a5b2728e046e7e0d8a1"} Mar 18 09:54:23.812925 master-0 kubenswrapper[3991]: I0318 09:54:23.812891 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt" event={"ID":"3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6","Type":"ContainerStarted","Data":"04e5c67b5ae79340d56b2dfa469c98052472e299f29df84c3890635b1574d4c0"} Mar 18 09:54:23.814206 master-0 kubenswrapper[3991]: I0318 09:54:23.814171 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-mqbmq" event={"ID":"8e812dd9-cd05-4e9e-8710-d0920181ece2","Type":"ContainerStarted","Data":"22baed4d026a2e73b0585b205810980acb867f841d04f3d6a690f1122607e415"} Mar 18 09:54:23.815538 master-0 kubenswrapper[3991]: I0318 09:54:23.815498 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c" event={"ID":"0d72e695-0183-4ee8-8add-5425e67f7138","Type":"ContainerStarted","Data":"ee46779ae89b4ca2573c0db3f08f40bcd1f36bd939f6b097aaa8ab0676c68690"} Mar 18 09:54:23.817355 master-0 
kubenswrapper[3991]: I0318 09:54:23.817301 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698" event={"ID":"ec53d7fa-445b-4e1d-84ef-545f08e80ccc","Type":"ContainerStarted","Data":"a0f6a23031d96231e99cbb9f2b16dea4d913c0ee0df84104c4f8c08579a04daa"} Mar 18 09:54:23.818616 master-0 kubenswrapper[3991]: I0318 09:54:23.818538 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq" event={"ID":"3646e0cd-49c9-4a98-a2e3-efe9359cc6c4","Type":"ContainerStarted","Data":"84a3629f241ccd15c8649ba629b3be31e2785a3b2224bbe09e95e6dbad4b5613"} Mar 18 09:54:24.824955 master-0 kubenswrapper[3991]: I0318 09:54:24.824871 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb" event={"ID":"6a6a616d-012a-479e-ab3d-b21295ea1805","Type":"ContainerStarted","Data":"baecef73d93e3ca9ff934b2e1c379d4ea8c4c91e3cae11e23b740ee52145d967"} Mar 18 09:54:25.622117 master-0 kubenswrapper[3991]: I0318 09:54:25.621937 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m" Mar 18 09:54:25.622399 master-0 kubenswrapper[3991]: E0318 09:54:25.622316 3991 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 09:54:25.622469 master-0 kubenswrapper[3991]: I0318 09:54:25.622414 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" Mar 18 09:54:25.622469 master-0 kubenswrapper[3991]: E0318 09:54:25.622459 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls podName:8ee99294-4785-49d0-b493-0d734cf09396 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:29.622421152 +0000 UTC m=+153.581361137 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-f4f7m" (UID: "8ee99294-4785-49d0-b493-0d734cf09396") : secret "image-registry-operator-tls" not found Mar 18 09:54:25.622640 master-0 kubenswrapper[3991]: I0318 09:54:25.622586 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-2glpv\" (UID: \"6f266bad-8b30-4300-ad93-9d48e61f2440\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" Mar 18 09:54:25.622709 master-0 kubenswrapper[3991]: I0318 09:54:25.622645 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert\") pod \"catalog-operator-68f85b4d6c-fhz5s\" (UID: \"ee376320-9ca0-444d-ab37-9cbcb6729b11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s" Mar 18 09:54:25.622806 master-0 kubenswrapper[3991]: E0318 09:54:25.622690 3991 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret 
"metrics-tls" not found Mar 18 09:54:25.622902 master-0 kubenswrapper[3991]: E0318 09:54:25.622812 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls podName:accc57fb-75f5-4f89-9804-6ede7f77e27c nodeName:}" failed. No retries permitted until 2026-03-18 09:54:29.62277742 +0000 UTC m=+153.581717425 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls") pod "ingress-operator-66b84d69b-kr5kz" (UID: "accc57fb-75f5-4f89-9804-6ede7f77e27c") : secret "metrics-tls" not found Mar 18 09:54:25.622902 master-0 kubenswrapper[3991]: E0318 09:54:25.622846 3991 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 09:54:25.622902 master-0 kubenswrapper[3991]: I0318 09:54:25.622701 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-r8fkv\" (UID: \"d4d2218c-f9df-4d43-8727-ed3a920e23f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv" Mar 18 09:54:25.623106 master-0 kubenswrapper[3991]: E0318 09:54:25.622895 3991 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 09:54:25.623106 master-0 kubenswrapper[3991]: E0318 09:54:25.622904 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics podName:6f266bad-8b30-4300-ad93-9d48e61f2440 nodeName:}" failed. 
No retries permitted until 2026-03-18 09:54:29.622886863 +0000 UTC m=+153.581826798 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-2glpv" (UID: "6f266bad-8b30-4300-ad93-9d48e61f2440") : secret "marketplace-operator-metrics" not found Mar 18 09:54:25.623106 master-0 kubenswrapper[3991]: E0318 09:54:25.623012 3991 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 09:54:25.623278 master-0 kubenswrapper[3991]: I0318 09:54:25.623201 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert\") pod \"olm-operator-5c9796789-hc74k\" (UID: \"db52ca42-e458-407f-9eeb-bf6de6405edc\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k" Mar 18 09:54:25.623386 master-0 kubenswrapper[3991]: E0318 09:54:25.623353 3991 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 09:54:25.623540 master-0 kubenswrapper[3991]: E0318 09:54:25.623509 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert podName:d4d2218c-f9df-4d43-8727-ed3a920e23f7 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:29.623468297 +0000 UTC m=+153.582408232 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-r8fkv" (UID: "d4d2218c-f9df-4d43-8727-ed3a920e23f7") : secret "package-server-manager-serving-cert" not found Mar 18 09:54:25.623703 master-0 kubenswrapper[3991]: E0318 09:54:25.623682 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert podName:ee376320-9ca0-444d-ab37-9cbcb6729b11 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:29.623659892 +0000 UTC m=+153.582599827 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert") pod "catalog-operator-68f85b4d6c-fhz5s" (UID: "ee376320-9ca0-444d-ab37-9cbcb6729b11") : secret "catalog-operator-serving-cert" not found Mar 18 09:54:25.623871 master-0 kubenswrapper[3991]: E0318 09:54:25.623850 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert podName:db52ca42-e458-407f-9eeb-bf6de6405edc nodeName:}" failed. No retries permitted until 2026-03-18 09:54:29.623808225 +0000 UTC m=+153.582748160 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert") pod "olm-operator-5c9796789-hc74k" (UID: "db52ca42-e458-407f-9eeb-bf6de6405edc") : secret "olm-operator-serving-cert" not found Mar 18 09:54:25.723857 master-0 kubenswrapper[3991]: I0318 09:54:25.723713 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls\") pod \"dns-operator-9c5679d8f-jrmkr\" (UID: \"8cb5158f-2199-42c0-995a-8490c9ec8a95\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-jrmkr" Mar 18 09:54:25.724113 master-0 kubenswrapper[3991]: E0318 09:54:25.723897 3991 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 09:54:25.724113 master-0 kubenswrapper[3991]: I0318 09:54:25.723906 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-8kx9m\" (UID: \"f69a00b6-d908-4485-bb0d-57594fc01d24\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m" Mar 18 09:54:25.724113 master-0 kubenswrapper[3991]: E0318 09:54:25.723987 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls podName:8cb5158f-2199-42c0-995a-8490c9ec8a95 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:29.723960164 +0000 UTC m=+153.682900089 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls") pod "dns-operator-9c5679d8f-jrmkr" (UID: "8cb5158f-2199-42c0-995a-8490c9ec8a95") : secret "metrics-tls" not found Mar 18 09:54:25.724113 master-0 kubenswrapper[3991]: E0318 09:54:25.724032 3991 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 09:54:25.724113 master-0 kubenswrapper[3991]: I0318 09:54:25.724093 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" Mar 18 09:54:25.724113 master-0 kubenswrapper[3991]: E0318 09:54:25.724107 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls podName:f69a00b6-d908-4485-bb0d-57594fc01d24 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:29.724082197 +0000 UTC m=+153.683022182 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-8kx9m" (UID: "f69a00b6-d908-4485-bb0d-57594fc01d24") : secret "cluster-monitoring-operator-tls" not found Mar 18 09:54:25.724530 master-0 kubenswrapper[3991]: I0318 09:54:25.724187 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" Mar 18 09:54:25.724530 master-0 kubenswrapper[3991]: I0318 09:54:25.724212 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-hkzr2\" (UID: \"ca4a0040-a638-46fa-a1cb-a19d83a7ebe4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2" Mar 18 09:54:25.724530 master-0 kubenswrapper[3991]: E0318 09:54:25.724321 3991 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 09:54:25.724530 master-0 kubenswrapper[3991]: E0318 09:54:25.724329 3991 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 09:54:25.724530 master-0 kubenswrapper[3991]: E0318 09:54:25.724409 3991 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 09:54:25.724530 master-0 kubenswrapper[3991]: E0318 
09:54:25.724368 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert podName:c2635254-a491-42e5-b598-461c24bf77ca nodeName:}" failed. No retries permitted until 2026-03-18 09:54:29.724354434 +0000 UTC m=+153.683294369 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-s7rm6" (UID: "c2635254-a491-42e5-b598-461c24bf77ca") : secret "performance-addon-operator-webhook-cert" not found Mar 18 09:54:25.724530 master-0 kubenswrapper[3991]: E0318 09:54:25.724475 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls podName:c2635254-a491-42e5-b598-461c24bf77ca nodeName:}" failed. No retries permitted until 2026-03-18 09:54:29.724451346 +0000 UTC m=+153.683391281 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-s7rm6" (UID: "c2635254-a491-42e5-b598-461c24bf77ca") : secret "node-tuning-operator-tls" not found Mar 18 09:54:25.724530 master-0 kubenswrapper[3991]: E0318 09:54:25.724508 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs podName:ca4a0040-a638-46fa-a1cb-a19d83a7ebe4 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:29.724495837 +0000 UTC m=+153.683435772 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-hkzr2" (UID: "ca4a0040-a638-46fa-a1cb-a19d83a7ebe4") : secret "multus-admission-controller-secret" not found Mar 18 09:54:26.936957 master-0 kubenswrapper[3991]: I0318 09:54:26.936139 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rzsk\" (UniqueName: \"kubernetes.io/projected/74795f5d-dcd7-4723-8931-c34b59ce3087-kube-api-access-8rzsk\") pod \"network-check-target-42l55\" (UID: \"74795f5d-dcd7-4723-8931-c34b59ce3087\") " pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 09:54:26.945913 master-0 kubenswrapper[3991]: I0318 09:54:26.945109 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rzsk\" (UniqueName: \"kubernetes.io/projected/74795f5d-dcd7-4723-8931-c34b59ce3087-kube-api-access-8rzsk\") pod \"network-check-target-42l55\" (UID: \"74795f5d-dcd7-4723-8931-c34b59ce3087\") " pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 09:54:27.171638 master-0 kubenswrapper[3991]: I0318 09:54:27.171542 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 09:54:28.496986 master-0 kubenswrapper[3991]: I0318 09:54:28.496051 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-42l55"] Mar 18 09:54:28.509017 master-0 kubenswrapper[3991]: W0318 09:54:28.508957 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74795f5d_dcd7_4723_8931_c34b59ce3087.slice/crio-983b16a4206de1f333a12de0d35d4c31d9a34f31d59c85ba786062d1421c921f WatchSource:0}: Error finding container 983b16a4206de1f333a12de0d35d4c31d9a34f31d59c85ba786062d1421c921f: Status 404 returned error can't find the container with id 983b16a4206de1f333a12de0d35d4c31d9a34f31d59c85ba786062d1421c921f Mar 18 09:54:28.842000 master-0 kubenswrapper[3991]: I0318 09:54:28.841946 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-42l55" event={"ID":"74795f5d-dcd7-4723-8931-c34b59ce3087","Type":"ContainerStarted","Data":"983b16a4206de1f333a12de0d35d4c31d9a34f31d59c85ba786062d1421c921f"} Mar 18 09:54:29.389577 master-0 kubenswrapper[3991]: I0318 09:54:29.389293 3991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:29.669669 master-0 kubenswrapper[3991]: I0318 09:54:29.669404 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-2glpv\" (UID: \"6f266bad-8b30-4300-ad93-9d48e61f2440\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" Mar 18 09:54:29.669669 master-0 kubenswrapper[3991]: I0318 09:54:29.669642 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" 
(UniqueName: \"kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert\") pod \"catalog-operator-68f85b4d6c-fhz5s\" (UID: \"ee376320-9ca0-444d-ab37-9cbcb6729b11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s" Mar 18 09:54:29.670679 master-0 kubenswrapper[3991]: I0318 09:54:29.669705 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-r8fkv\" (UID: \"d4d2218c-f9df-4d43-8727-ed3a920e23f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv" Mar 18 09:54:29.670679 master-0 kubenswrapper[3991]: E0318 09:54:29.669641 3991 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 09:54:29.670679 master-0 kubenswrapper[3991]: I0318 09:54:29.669781 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert\") pod \"olm-operator-5c9796789-hc74k\" (UID: \"db52ca42-e458-407f-9eeb-bf6de6405edc\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k" Mar 18 09:54:29.670679 master-0 kubenswrapper[3991]: I0318 09:54:29.669871 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m" Mar 18 09:54:29.670679 master-0 kubenswrapper[3991]: E0318 09:54:29.669926 3991 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics podName:6f266bad-8b30-4300-ad93-9d48e61f2440 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:37.669881085 +0000 UTC m=+161.628821020 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-2glpv" (UID: "6f266bad-8b30-4300-ad93-9d48e61f2440") : secret "marketplace-operator-metrics" not found Mar 18 09:54:29.670679 master-0 kubenswrapper[3991]: E0318 09:54:29.669745 3991 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 09:54:29.670679 master-0 kubenswrapper[3991]: E0318 09:54:29.670047 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert podName:ee376320-9ca0-444d-ab37-9cbcb6729b11 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:37.670006418 +0000 UTC m=+161.628946353 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert") pod "catalog-operator-68f85b4d6c-fhz5s" (UID: "ee376320-9ca0-444d-ab37-9cbcb6729b11") : secret "catalog-operator-serving-cert" not found Mar 18 09:54:29.670679 master-0 kubenswrapper[3991]: E0318 09:54:29.670110 3991 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 09:54:29.670679 master-0 kubenswrapper[3991]: E0318 09:54:29.670158 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert podName:db52ca42-e458-407f-9eeb-bf6de6405edc nodeName:}" failed. 
No retries permitted until 2026-03-18 09:54:37.670142032 +0000 UTC m=+161.629081977 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert") pod "olm-operator-5c9796789-hc74k" (UID: "db52ca42-e458-407f-9eeb-bf6de6405edc") : secret "olm-operator-serving-cert" not found Mar 18 09:54:29.670679 master-0 kubenswrapper[3991]: E0318 09:54:29.670111 3991 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 09:54:29.670679 master-0 kubenswrapper[3991]: E0318 09:54:29.670191 3991 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 09:54:29.670679 master-0 kubenswrapper[3991]: E0318 09:54:29.670218 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls podName:8ee99294-4785-49d0-b493-0d734cf09396 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:37.670206363 +0000 UTC m=+161.629146298 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-f4f7m" (UID: "8ee99294-4785-49d0-b493-0d734cf09396") : secret "image-registry-operator-tls" not found Mar 18 09:54:29.670679 master-0 kubenswrapper[3991]: E0318 09:54:29.670298 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert podName:d4d2218c-f9df-4d43-8727-ed3a920e23f7 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:37.670267195 +0000 UTC m=+161.629207130 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-r8fkv" (UID: "d4d2218c-f9df-4d43-8727-ed3a920e23f7") : secret "package-server-manager-serving-cert" not found Mar 18 09:54:29.670679 master-0 kubenswrapper[3991]: I0318 09:54:29.670334 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" Mar 18 09:54:29.670679 master-0 kubenswrapper[3991]: E0318 09:54:29.670465 3991 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 09:54:29.671705 master-0 kubenswrapper[3991]: E0318 09:54:29.670530 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls podName:accc57fb-75f5-4f89-9804-6ede7f77e27c nodeName:}" failed. No retries permitted until 2026-03-18 09:54:37.670513071 +0000 UTC m=+161.629452996 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls") pod "ingress-operator-66b84d69b-kr5kz" (UID: "accc57fb-75f5-4f89-9804-6ede7f77e27c") : secret "metrics-tls" not found Mar 18 09:54:29.772189 master-0 kubenswrapper[3991]: I0318 09:54:29.772066 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls\") pod \"dns-operator-9c5679d8f-jrmkr\" (UID: \"8cb5158f-2199-42c0-995a-8490c9ec8a95\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-jrmkr" Mar 18 09:54:29.772354 master-0 kubenswrapper[3991]: I0318 09:54:29.772283 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-8kx9m\" (UID: \"f69a00b6-d908-4485-bb0d-57594fc01d24\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m" Mar 18 09:54:29.772425 master-0 kubenswrapper[3991]: I0318 09:54:29.772393 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" Mar 18 09:54:29.772629 master-0 kubenswrapper[3991]: I0318 09:54:29.772570 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" Mar 18 09:54:29.772717 master-0 kubenswrapper[3991]: I0318 09:54:29.772652 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-hkzr2\" (UID: \"ca4a0040-a638-46fa-a1cb-a19d83a7ebe4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2" Mar 18 09:54:29.773016 master-0 kubenswrapper[3991]: E0318 09:54:29.772962 3991 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 09:54:29.773120 master-0 kubenswrapper[3991]: E0318 09:54:29.773088 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs podName:ca4a0040-a638-46fa-a1cb-a19d83a7ebe4 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:37.773052427 +0000 UTC m=+161.731992362 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-hkzr2" (UID: "ca4a0040-a638-46fa-a1cb-a19d83a7ebe4") : secret "multus-admission-controller-secret" not found Mar 18 09:54:29.773255 master-0 kubenswrapper[3991]: E0318 09:54:29.773222 3991 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 09:54:29.773320 master-0 kubenswrapper[3991]: E0318 09:54:29.773302 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls podName:8cb5158f-2199-42c0-995a-8490c9ec8a95 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:37.773279213 +0000 UTC m=+161.732219148 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls") pod "dns-operator-9c5679d8f-jrmkr" (UID: "8cb5158f-2199-42c0-995a-8490c9ec8a95") : secret "metrics-tls" not found Mar 18 09:54:29.773475 master-0 kubenswrapper[3991]: E0318 09:54:29.773426 3991 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 09:54:29.773544 master-0 kubenswrapper[3991]: E0318 09:54:29.773509 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls podName:f69a00b6-d908-4485-bb0d-57594fc01d24 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:37.773483568 +0000 UTC m=+161.732423513 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-8kx9m" (UID: "f69a00b6-d908-4485-bb0d-57594fc01d24") : secret "cluster-monitoring-operator-tls" not found Mar 18 09:54:29.773662 master-0 kubenswrapper[3991]: E0318 09:54:29.773630 3991 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 09:54:29.773728 master-0 kubenswrapper[3991]: E0318 09:54:29.773703 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls podName:c2635254-a491-42e5-b598-461c24bf77ca nodeName:}" failed. No retries permitted until 2026-03-18 09:54:37.773681952 +0000 UTC m=+161.732621897 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-s7rm6" (UID: "c2635254-a491-42e5-b598-461c24bf77ca") : secret "node-tuning-operator-tls" not found Mar 18 09:54:29.773893 master-0 kubenswrapper[3991]: E0318 09:54:29.773859 3991 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 09:54:29.773983 master-0 kubenswrapper[3991]: E0318 09:54:29.773940 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert podName:c2635254-a491-42e5-b598-461c24bf77ca nodeName:}" failed. No retries permitted until 2026-03-18 09:54:37.773916648 +0000 UTC m=+161.732856593 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-s7rm6" (UID: "c2635254-a491-42e5-b598-461c24bf77ca") : secret "performance-addon-operator-webhook-cert" not found Mar 18 09:54:29.848190 master-0 kubenswrapper[3991]: I0318 09:54:29.848098 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-42l55" event={"ID":"74795f5d-dcd7-4723-8931-c34b59ce3087","Type":"ContainerStarted","Data":"331ba2f2e3e004446b1ad6de227ffd6c04686b85ceb7ddd9190e35710a01c39c"} Mar 18 09:54:31.855432 master-0 kubenswrapper[3991]: I0318 09:54:31.855386 3991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 09:54:32.965786 master-0 kubenswrapper[3991]: I0318 09:54:32.965681 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-network-diagnostics/network-check-target-42l55" podStartSLOduration=72.965647926 podStartE2EDuration="1m12.965647926s" podCreationTimestamp="2026-03-18 09:53:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:54:32.959980469 +0000 UTC m=+156.918920444" watchObservedRunningTime="2026-03-18 09:54:32.965647926 +0000 UTC m=+156.924587861" Mar 18 09:54:34.477218 master-0 kubenswrapper[3991]: I0318 09:54:34.477122 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb" podStartSLOduration=113.477106405 podStartE2EDuration="1m53.477106405s" podCreationTimestamp="2026-03-18 09:52:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:54:34.234817647 +0000 UTC m=+158.193757582" watchObservedRunningTime="2026-03-18 09:54:34.477106405 +0000 UTC m=+158.436046300" Mar 18 09:54:37.764814 master-0 kubenswrapper[3991]: I0318 09:54:37.764722 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m" Mar 18 09:54:37.765781 master-0 kubenswrapper[3991]: I0318 09:54:37.764875 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" Mar 18 09:54:37.765781 
master-0 kubenswrapper[3991]: E0318 09:54:37.764999 3991 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 09:54:37.765781 master-0 kubenswrapper[3991]: E0318 09:54:37.765105 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls podName:8ee99294-4785-49d0-b493-0d734cf09396 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:53.765074459 +0000 UTC m=+177.724014394 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-f4f7m" (UID: "8ee99294-4785-49d0-b493-0d734cf09396") : secret "image-registry-operator-tls" not found Mar 18 09:54:37.765781 master-0 kubenswrapper[3991]: E0318 09:54:37.765133 3991 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 09:54:37.765781 master-0 kubenswrapper[3991]: I0318 09:54:37.765205 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-2glpv\" (UID: \"6f266bad-8b30-4300-ad93-9d48e61f2440\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" Mar 18 09:54:37.765781 master-0 kubenswrapper[3991]: E0318 09:54:37.765229 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls podName:accc57fb-75f5-4f89-9804-6ede7f77e27c nodeName:}" failed. No retries permitted until 2026-03-18 09:54:53.765198222 +0000 UTC m=+177.724138157 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls") pod "ingress-operator-66b84d69b-kr5kz" (UID: "accc57fb-75f5-4f89-9804-6ede7f77e27c") : secret "metrics-tls" not found Mar 18 09:54:37.765781 master-0 kubenswrapper[3991]: E0318 09:54:37.765304 3991 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 09:54:37.765781 master-0 kubenswrapper[3991]: E0318 09:54:37.765347 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics podName:6f266bad-8b30-4300-ad93-9d48e61f2440 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:53.765331095 +0000 UTC m=+177.724271020 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-2glpv" (UID: "6f266bad-8b30-4300-ad93-9d48e61f2440") : secret "marketplace-operator-metrics" not found Mar 18 09:54:37.765781 master-0 kubenswrapper[3991]: I0318 09:54:37.765377 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert\") pod \"catalog-operator-68f85b4d6c-fhz5s\" (UID: \"ee376320-9ca0-444d-ab37-9cbcb6729b11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s" Mar 18 09:54:37.765781 master-0 kubenswrapper[3991]: I0318 09:54:37.765422 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-r8fkv\" (UID: 
\"d4d2218c-f9df-4d43-8727-ed3a920e23f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv" Mar 18 09:54:37.765781 master-0 kubenswrapper[3991]: E0318 09:54:37.765543 3991 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 09:54:37.765781 master-0 kubenswrapper[3991]: E0318 09:54:37.765557 3991 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 09:54:37.765781 master-0 kubenswrapper[3991]: E0318 09:54:37.765581 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert podName:d4d2218c-f9df-4d43-8727-ed3a920e23f7 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:53.765568281 +0000 UTC m=+177.724508216 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-r8fkv" (UID: "d4d2218c-f9df-4d43-8727-ed3a920e23f7") : secret "package-server-manager-serving-cert" not found Mar 18 09:54:37.765781 master-0 kubenswrapper[3991]: E0318 09:54:37.765631 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert podName:ee376320-9ca0-444d-ab37-9cbcb6729b11 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:53.765605042 +0000 UTC m=+177.724544977 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert") pod "catalog-operator-68f85b4d6c-fhz5s" (UID: "ee376320-9ca0-444d-ab37-9cbcb6729b11") : secret "catalog-operator-serving-cert" not found Mar 18 09:54:37.765781 master-0 kubenswrapper[3991]: I0318 09:54:37.765671 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert\") pod \"olm-operator-5c9796789-hc74k\" (UID: \"db52ca42-e458-407f-9eeb-bf6de6405edc\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k" Mar 18 09:54:37.767259 master-0 kubenswrapper[3991]: E0318 09:54:37.765807 3991 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 09:54:37.767259 master-0 kubenswrapper[3991]: E0318 09:54:37.765913 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert podName:db52ca42-e458-407f-9eeb-bf6de6405edc nodeName:}" failed. No retries permitted until 2026-03-18 09:54:53.765891618 +0000 UTC m=+177.724831543 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert") pod "olm-operator-5c9796789-hc74k" (UID: "db52ca42-e458-407f-9eeb-bf6de6405edc") : secret "olm-operator-serving-cert" not found Mar 18 09:54:37.866869 master-0 kubenswrapper[3991]: I0318 09:54:37.866804 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" Mar 18 09:54:37.867090 master-0 kubenswrapper[3991]: E0318 09:54:37.867044 3991 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 09:54:37.867183 master-0 kubenswrapper[3991]: E0318 09:54:37.867164 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls podName:c2635254-a491-42e5-b598-461c24bf77ca nodeName:}" failed. No retries permitted until 2026-03-18 09:54:53.867134584 +0000 UTC m=+177.826074519 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-s7rm6" (UID: "c2635254-a491-42e5-b598-461c24bf77ca") : secret "node-tuning-operator-tls" not found Mar 18 09:54:37.867255 master-0 kubenswrapper[3991]: I0318 09:54:37.867153 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" Mar 18 09:54:37.867255 master-0 kubenswrapper[3991]: I0318 09:54:37.867239 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-hkzr2\" (UID: \"ca4a0040-a638-46fa-a1cb-a19d83a7ebe4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2" Mar 18 09:54:37.867328 master-0 kubenswrapper[3991]: E0318 09:54:37.867277 3991 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 09:54:37.867369 master-0 kubenswrapper[3991]: E0318 09:54:37.867350 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert podName:c2635254-a491-42e5-b598-461c24bf77ca nodeName:}" failed. No retries permitted until 2026-03-18 09:54:53.867331889 +0000 UTC m=+177.826271874 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-s7rm6" (UID: "c2635254-a491-42e5-b598-461c24bf77ca") : secret "performance-addon-operator-webhook-cert" not found Mar 18 09:54:37.867490 master-0 kubenswrapper[3991]: E0318 09:54:37.867440 3991 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 09:54:37.867623 master-0 kubenswrapper[3991]: I0318 09:54:37.867457 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls\") pod \"dns-operator-9c5679d8f-jrmkr\" (UID: \"8cb5158f-2199-42c0-995a-8490c9ec8a95\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-jrmkr" Mar 18 09:54:37.867671 master-0 kubenswrapper[3991]: I0318 09:54:37.867642 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-8kx9m\" (UID: \"f69a00b6-d908-4485-bb0d-57594fc01d24\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m" Mar 18 09:54:37.867789 master-0 kubenswrapper[3991]: E0318 09:54:37.867595 3991 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 09:54:37.867904 master-0 kubenswrapper[3991]: E0318 09:54:37.867766 3991 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 09:54:37.867960 master-0 kubenswrapper[3991]: E0318 09:54:37.867897 3991 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls podName:8cb5158f-2199-42c0-995a-8490c9ec8a95 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:53.867863841 +0000 UTC m=+177.826803786 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls") pod "dns-operator-9c5679d8f-jrmkr" (UID: "8cb5158f-2199-42c0-995a-8490c9ec8a95") : secret "metrics-tls" not found Mar 18 09:54:37.867960 master-0 kubenswrapper[3991]: E0318 09:54:37.867938 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls podName:f69a00b6-d908-4485-bb0d-57594fc01d24 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:53.867920503 +0000 UTC m=+177.826860438 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-8kx9m" (UID: "f69a00b6-d908-4485-bb0d-57594fc01d24") : secret "cluster-monitoring-operator-tls" not found Mar 18 09:54:37.868731 master-0 kubenswrapper[3991]: E0318 09:54:37.868644 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs podName:ca4a0040-a638-46fa-a1cb-a19d83a7ebe4 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:53.868602909 +0000 UTC m=+177.827542814 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-hkzr2" (UID: "ca4a0040-a638-46fa-a1cb-a19d83a7ebe4") : secret "multus-admission-controller-secret" not found Mar 18 09:54:40.899842 master-0 kubenswrapper[3991]: I0318 09:54:40.896272 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr" event={"ID":"f076eaf0-b041-4db0-ba06-3d85e23bb654","Type":"ContainerStarted","Data":"86e19dd48a4220e684cd4591a7ea73d2539f388a0f50f6f6c55feee37bcbb65f"} Mar 18 09:54:40.904838 master-0 kubenswrapper[3991]: I0318 09:54:40.901675 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c" event={"ID":"0d72e695-0183-4ee8-8add-5425e67f7138","Type":"ContainerStarted","Data":"756a2f4fb3414c500a82e436fbad8cd30da785b7959d7459fc20c6af350a8060"} Mar 18 09:54:40.904838 master-0 kubenswrapper[3991]: I0318 09:54:40.902669 3991 generic.go:334] "Generic (PLEG): container finished" podID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerID="981f5359f2b3c5ba98385487e0fffb3f9c331fb34bb0e106e475367f63bb51f9" exitCode=0 Mar 18 09:54:40.904838 master-0 kubenswrapper[3991]: I0318 09:54:40.903133 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" event={"ID":"0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480","Type":"ContainerDied","Data":"981f5359f2b3c5ba98385487e0fffb3f9c331fb34bb0e106e475367f63bb51f9"} Mar 18 09:54:40.909137 master-0 kubenswrapper[3991]: I0318 09:54:40.905576 3991 generic.go:334] "Generic (PLEG): container finished" podID="d26036f1-bdce-4ec5-873f-962fa7e8e6c1" containerID="ded65abc153650de9d5b3f05283a7442214a212644c7845fac73ca03c4499d84" exitCode=0 Mar 18 09:54:40.909137 master-0 kubenswrapper[3991]: I0318 09:54:40.905607 3991 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q" event={"ID":"d26036f1-bdce-4ec5-873f-962fa7e8e6c1","Type":"ContainerDied","Data":"ded65abc153650de9d5b3f05283a7442214a212644c7845fac73ca03c4499d84"} Mar 18 09:54:40.909137 master-0 kubenswrapper[3991]: I0318 09:54:40.906920 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr" event={"ID":"bb35841e-d992-4044-aaaa-06c9faf47bd0","Type":"ContainerStarted","Data":"21ea6abc98e78a0444eb255d9f1edf6ce13e5e0f11a1d4b38c35dd0e5e280fcf"} Mar 18 09:54:40.909137 master-0 kubenswrapper[3991]: I0318 09:54:40.908639 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr" podStartSLOduration=101.804147223 podStartE2EDuration="1m58.908623819s" podCreationTimestamp="2026-03-18 09:52:42 +0000 UTC" firstStartedPulling="2026-03-18 09:54:23.485746773 +0000 UTC m=+147.444686678" lastFinishedPulling="2026-03-18 09:54:40.590223359 +0000 UTC m=+164.549163274" observedRunningTime="2026-03-18 09:54:40.907560503 +0000 UTC m=+164.866500408" watchObservedRunningTime="2026-03-18 09:54:40.908623819 +0000 UTC m=+164.867563714" Mar 18 09:54:40.912843 master-0 kubenswrapper[3991]: I0318 09:54:40.909357 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt" event={"ID":"3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6","Type":"ContainerStarted","Data":"da02ee0de03a088a8c40f809ca8f007d6167a1c499d12f1066049752159499b0"} Mar 18 09:54:40.926848 master-0 kubenswrapper[3991]: I0318 09:54:40.923694 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c" podStartSLOduration=101.327919736 podStartE2EDuration="1m58.923677711s" 
podCreationTimestamp="2026-03-18 09:52:42 +0000 UTC" firstStartedPulling="2026-03-18 09:54:22.889325055 +0000 UTC m=+146.848264950" lastFinishedPulling="2026-03-18 09:54:40.48508303 +0000 UTC m=+164.444022925" observedRunningTime="2026-03-18 09:54:40.922155074 +0000 UTC m=+164.881094989" watchObservedRunningTime="2026-03-18 09:54:40.923677711 +0000 UTC m=+164.882617606" Mar 18 09:54:40.929566 master-0 kubenswrapper[3991]: I0318 09:54:40.929507 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-mqbmq" event={"ID":"8e812dd9-cd05-4e9e-8710-d0920181ece2","Type":"ContainerStarted","Data":"0f3ba17641fd2eeb6aa8e7525f8b6f8d95a3be2ff7d2acad4eb9670c5982bbeb"} Mar 18 09:54:40.939122 master-0 kubenswrapper[3991]: I0318 09:54:40.938792 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq" event={"ID":"3646e0cd-49c9-4a98-a2e3-efe9359cc6c4","Type":"ContainerStarted","Data":"c8f91dc57ea6bc611089a31345d27ad1b6b311c14621b5aebef7b7aac62f0940"} Mar 18 09:54:40.942845 master-0 kubenswrapper[3991]: I0318 09:54:40.941437 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698" event={"ID":"ec53d7fa-445b-4e1d-84ef-545f08e80ccc","Type":"ContainerStarted","Data":"5852b37c5e8c94f0baa4c4a1981174d60f6d9f69d3672da3d78ad25102d900a1"} Mar 18 09:54:40.950842 master-0 kubenswrapper[3991]: I0318 09:54:40.944915 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr" podStartSLOduration=101.89931303 podStartE2EDuration="1m58.944903381s" podCreationTimestamp="2026-03-18 09:52:42 +0000 UTC" firstStartedPulling="2026-03-18 09:54:22.89038236 +0000 UTC m=+146.849322255" lastFinishedPulling="2026-03-18 09:54:39.935972711 +0000 
UTC m=+163.894912606" observedRunningTime="2026-03-18 09:54:40.944535702 +0000 UTC m=+164.903475597" watchObservedRunningTime="2026-03-18 09:54:40.944903381 +0000 UTC m=+164.903843276" Mar 18 09:54:41.062844 master-0 kubenswrapper[3991]: I0318 09:54:41.059798 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt" podStartSLOduration=104.807254827 podStartE2EDuration="2m2.059777275s" podCreationTimestamp="2026-03-18 09:52:39 +0000 UTC" firstStartedPulling="2026-03-18 09:54:23.231994428 +0000 UTC m=+147.190934333" lastFinishedPulling="2026-03-18 09:54:40.484516856 +0000 UTC m=+164.443456781" observedRunningTime="2026-03-18 09:54:41.040270335 +0000 UTC m=+164.999210240" watchObservedRunningTime="2026-03-18 09:54:41.059777275 +0000 UTC m=+165.018717180" Mar 18 09:54:41.062844 master-0 kubenswrapper[3991]: I0318 09:54:41.060157 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq" podStartSLOduration=105.736148346 podStartE2EDuration="2m3.060151284s" podCreationTimestamp="2026-03-18 09:52:38 +0000 UTC" firstStartedPulling="2026-03-18 09:54:23.232040889 +0000 UTC m=+147.190980804" lastFinishedPulling="2026-03-18 09:54:40.556043817 +0000 UTC m=+164.514983742" observedRunningTime="2026-03-18 09:54:41.057808697 +0000 UTC m=+165.016748592" watchObservedRunningTime="2026-03-18 09:54:41.060151284 +0000 UTC m=+165.019091179" Mar 18 09:54:41.106846 master-0 kubenswrapper[3991]: I0318 09:54:41.104216 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698" podStartSLOduration=103.990168957 podStartE2EDuration="2m1.104199963s" podCreationTimestamp="2026-03-18 09:52:40 +0000 UTC" firstStartedPulling="2026-03-18 09:54:23.477012123 +0000 
UTC m=+147.435952058" lastFinishedPulling="2026-03-18 09:54:40.591043159 +0000 UTC m=+164.549983064" observedRunningTime="2026-03-18 09:54:41.09117306 +0000 UTC m=+165.050112955" watchObservedRunningTime="2026-03-18 09:54:41.104199963 +0000 UTC m=+165.063139848" Mar 18 09:54:41.106846 master-0 kubenswrapper[3991]: I0318 09:54:41.105552 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-mqbmq" podStartSLOduration=101.792053191 podStartE2EDuration="1m59.105546706s" podCreationTimestamp="2026-03-18 09:52:42 +0000 UTC" firstStartedPulling="2026-03-18 09:54:23.232190393 +0000 UTC m=+147.191130298" lastFinishedPulling="2026-03-18 09:54:40.545683918 +0000 UTC m=+164.504623813" observedRunningTime="2026-03-18 09:54:41.103133128 +0000 UTC m=+165.062073023" watchObservedRunningTime="2026-03-18 09:54:41.105546706 +0000 UTC m=+165.064486601" Mar 18 09:54:41.949351 master-0 kubenswrapper[3991]: I0318 09:54:41.949009 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc" event={"ID":"0999f781-3299-4cb6-ba76-2a4f4584c685","Type":"ContainerStarted","Data":"e5c331496115ef5ceb50ea93103ae754d1d16032e25eefad5a38ee8ba0e6ac68"} Mar 18 09:54:41.951739 master-0 kubenswrapper[3991]: I0318 09:54:41.951483 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" event={"ID":"a078565a-6970-4f42-84f4-938f1d637245","Type":"ContainerStarted","Data":"035a83745bfe6ed219f87a31bd7766c9d9b162354f5f4e36d6dc8a6cc1dbc053"} Mar 18 09:54:41.993730 master-0 kubenswrapper[3991]: I0318 09:54:41.993658 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc" podStartSLOduration=106.693787063 podStartE2EDuration="2m3.99363894s" 
podCreationTimestamp="2026-03-18 09:52:38 +0000 UTC" firstStartedPulling="2026-03-18 09:54:23.235603175 +0000 UTC m=+147.194543110" lastFinishedPulling="2026-03-18 09:54:40.535455082 +0000 UTC m=+164.494394987" observedRunningTime="2026-03-18 09:54:41.978769562 +0000 UTC m=+165.937709457" watchObservedRunningTime="2026-03-18 09:54:41.99363894 +0000 UTC m=+165.952578835" Mar 18 09:54:41.994878 master-0 kubenswrapper[3991]: I0318 09:54:41.994817 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-8487694857-8tqwj"] Mar 18 09:54:42.001252 master-0 kubenswrapper[3991]: I0318 09:54:42.001220 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-8487694857-8tqwj" Mar 18 09:54:42.015851 master-0 kubenswrapper[3991]: I0318 09:54:42.012941 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hww8g\" (UniqueName: \"kubernetes.io/projected/8126b78e-d1e4-4de7-a71d-ebc9fa0afdae-kube-api-access-hww8g\") pod \"migrator-8487694857-8tqwj\" (UID: \"8126b78e-d1e4-4de7-a71d-ebc9fa0afdae\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-8tqwj" Mar 18 09:54:42.033908 master-0 kubenswrapper[3991]: I0318 09:54:42.033348 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-8487694857-8tqwj"] Mar 18 09:54:42.039736 master-0 kubenswrapper[3991]: I0318 09:54:42.038608 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 18 09:54:42.039736 master-0 kubenswrapper[3991]: I0318 09:54:42.038844 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 18 09:54:42.116456 master-0 kubenswrapper[3991]: I0318 09:54:42.116419 3991 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hww8g\" (UniqueName: \"kubernetes.io/projected/8126b78e-d1e4-4de7-a71d-ebc9fa0afdae-kube-api-access-hww8g\") pod \"migrator-8487694857-8tqwj\" (UID: \"8126b78e-d1e4-4de7-a71d-ebc9fa0afdae\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-8tqwj" Mar 18 09:54:42.135408 master-0 kubenswrapper[3991]: I0318 09:54:42.135300 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" podStartSLOduration=105.830598814 podStartE2EDuration="2m3.135281797s" podCreationTimestamp="2026-03-18 09:52:39 +0000 UTC" firstStartedPulling="2026-03-18 09:54:23.231537727 +0000 UTC m=+147.190477662" lastFinishedPulling="2026-03-18 09:54:40.53622075 +0000 UTC m=+164.495160645" observedRunningTime="2026-03-18 09:54:42.080979361 +0000 UTC m=+166.039919256" watchObservedRunningTime="2026-03-18 09:54:42.135281797 +0000 UTC m=+166.094221692" Mar 18 09:54:42.158719 master-0 kubenswrapper[3991]: I0318 09:54:42.158654 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hww8g\" (UniqueName: \"kubernetes.io/projected/8126b78e-d1e4-4de7-a71d-ebc9fa0afdae-kube-api-access-hww8g\") pod \"migrator-8487694857-8tqwj\" (UID: \"8126b78e-d1e4-4de7-a71d-ebc9fa0afdae\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-8tqwj" Mar 18 09:54:42.331997 master-0 kubenswrapper[3991]: I0318 09:54:42.331931 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-8487694857-8tqwj" Mar 18 09:54:42.533377 master-0 kubenswrapper[3991]: I0318 09:54:42.532860 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-2l6cq"] Mar 18 09:54:42.533554 master-0 kubenswrapper[3991]: I0318 09:54:42.533518 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-2l6cq" Mar 18 09:54:42.533917 master-0 kubenswrapper[3991]: I0318 09:54:42.533887 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-2l6cq"] Mar 18 09:54:42.593454 master-0 kubenswrapper[3991]: I0318 09:54:42.593278 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-8487694857-8tqwj"] Mar 18 09:54:42.627903 master-0 kubenswrapper[3991]: I0318 09:54:42.626970 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv8x5\" (UniqueName: \"kubernetes.io/projected/932a70df-3afe-4873-9449-ab6e061d3fe3-kube-api-access-fv8x5\") pod \"csi-snapshot-controller-64854d9cff-2l6cq\" (UID: \"932a70df-3afe-4873-9449-ab6e061d3fe3\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-2l6cq" Mar 18 09:54:42.728961 master-0 kubenswrapper[3991]: I0318 09:54:42.728422 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv8x5\" (UniqueName: \"kubernetes.io/projected/932a70df-3afe-4873-9449-ab6e061d3fe3-kube-api-access-fv8x5\") pod \"csi-snapshot-controller-64854d9cff-2l6cq\" (UID: \"932a70df-3afe-4873-9449-ab6e061d3fe3\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-2l6cq" Mar 18 09:54:42.758641 master-0 kubenswrapper[3991]: I0318 09:54:42.750516 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv8x5\" (UniqueName: \"kubernetes.io/projected/932a70df-3afe-4873-9449-ab6e061d3fe3-kube-api-access-fv8x5\") pod \"csi-snapshot-controller-64854d9cff-2l6cq\" (UID: \"932a70df-3afe-4873-9449-ab6e061d3fe3\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-2l6cq" Mar 18 09:54:42.869847 master-0 kubenswrapper[3991]: I0318 
09:54:42.869679 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-2l6cq" Mar 18 09:54:42.877877 master-0 kubenswrapper[3991]: I0318 09:54:42.877718 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-zvdvg"] Mar 18 09:54:42.878277 master-0 kubenswrapper[3991]: I0318 09:54:42.878241 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-zvdvg" Mar 18 09:54:42.882047 master-0 kubenswrapper[3991]: I0318 09:54:42.880523 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 09:54:42.882047 master-0 kubenswrapper[3991]: I0318 09:54:42.880853 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 18 09:54:42.882047 master-0 kubenswrapper[3991]: I0318 09:54:42.880997 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 09:54:42.882047 master-0 kubenswrapper[3991]: I0318 09:54:42.881068 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 18 09:54:42.882047 master-0 kubenswrapper[3991]: I0318 09:54:42.881257 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 09:54:42.882453 master-0 kubenswrapper[3991]: I0318 09:54:42.880967 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 09:54:42.885716 master-0 kubenswrapper[3991]: I0318 09:54:42.885655 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-zvdvg"] Mar 18 09:54:42.931372 master-0 
kubenswrapper[3991]: I0318 09:54:42.931306 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l4b6\" (UniqueName: \"kubernetes.io/projected/025ade16-8502-4b71-a4be-f13dee081e3a-kube-api-access-8l4b6\") pod \"controller-manager-f5df8899c-zvdvg\" (UID: \"025ade16-8502-4b71-a4be-f13dee081e3a\") " pod="openshift-controller-manager/controller-manager-f5df8899c-zvdvg" Mar 18 09:54:42.931612 master-0 kubenswrapper[3991]: I0318 09:54:42.931570 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/025ade16-8502-4b71-a4be-f13dee081e3a-client-ca\") pod \"controller-manager-f5df8899c-zvdvg\" (UID: \"025ade16-8502-4b71-a4be-f13dee081e3a\") " pod="openshift-controller-manager/controller-manager-f5df8899c-zvdvg" Mar 18 09:54:42.931664 master-0 kubenswrapper[3991]: I0318 09:54:42.931614 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/025ade16-8502-4b71-a4be-f13dee081e3a-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-zvdvg\" (UID: \"025ade16-8502-4b71-a4be-f13dee081e3a\") " pod="openshift-controller-manager/controller-manager-f5df8899c-zvdvg" Mar 18 09:54:42.931664 master-0 kubenswrapper[3991]: I0318 09:54:42.931645 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/025ade16-8502-4b71-a4be-f13dee081e3a-serving-cert\") pod \"controller-manager-f5df8899c-zvdvg\" (UID: \"025ade16-8502-4b71-a4be-f13dee081e3a\") " pod="openshift-controller-manager/controller-manager-f5df8899c-zvdvg" Mar 18 09:54:42.931720 master-0 kubenswrapper[3991]: I0318 09:54:42.931662 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/025ade16-8502-4b71-a4be-f13dee081e3a-config\") pod \"controller-manager-f5df8899c-zvdvg\" (UID: \"025ade16-8502-4b71-a4be-f13dee081e3a\") " pod="openshift-controller-manager/controller-manager-f5df8899c-zvdvg" Mar 18 09:54:42.955948 master-0 kubenswrapper[3991]: I0318 09:54:42.955754 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-8487694857-8tqwj" event={"ID":"8126b78e-d1e4-4de7-a71d-ebc9fa0afdae","Type":"ContainerStarted","Data":"c669ea9b66a51273cf2d30ced0d0c7e6bfc9166bf41cddcbf86ac434cad57ea6"} Mar 18 09:54:43.033534 master-0 kubenswrapper[3991]: I0318 09:54:43.033336 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/025ade16-8502-4b71-a4be-f13dee081e3a-client-ca\") pod \"controller-manager-f5df8899c-zvdvg\" (UID: \"025ade16-8502-4b71-a4be-f13dee081e3a\") " pod="openshift-controller-manager/controller-manager-f5df8899c-zvdvg" Mar 18 09:54:43.033534 master-0 kubenswrapper[3991]: I0318 09:54:43.033521 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/025ade16-8502-4b71-a4be-f13dee081e3a-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-zvdvg\" (UID: \"025ade16-8502-4b71-a4be-f13dee081e3a\") " pod="openshift-controller-manager/controller-manager-f5df8899c-zvdvg" Mar 18 09:54:43.033752 master-0 kubenswrapper[3991]: E0318 09:54:43.033556 3991 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Mar 18 09:54:43.033752 master-0 kubenswrapper[3991]: I0318 09:54:43.033590 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/025ade16-8502-4b71-a4be-f13dee081e3a-serving-cert\") pod \"controller-manager-f5df8899c-zvdvg\" (UID: 
\"025ade16-8502-4b71-a4be-f13dee081e3a\") " pod="openshift-controller-manager/controller-manager-f5df8899c-zvdvg" Mar 18 09:54:43.033752 master-0 kubenswrapper[3991]: E0318 09:54:43.033600 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/025ade16-8502-4b71-a4be-f13dee081e3a-proxy-ca-bundles podName:025ade16-8502-4b71-a4be-f13dee081e3a nodeName:}" failed. No retries permitted until 2026-03-18 09:54:43.533584225 +0000 UTC m=+167.492524120 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/025ade16-8502-4b71-a4be-f13dee081e3a-proxy-ca-bundles") pod "controller-manager-f5df8899c-zvdvg" (UID: "025ade16-8502-4b71-a4be-f13dee081e3a") : configmap "openshift-global-ca" not found Mar 18 09:54:43.033752 master-0 kubenswrapper[3991]: E0318 09:54:43.033660 3991 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 18 09:54:43.033752 master-0 kubenswrapper[3991]: E0318 09:54:43.033732 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/025ade16-8502-4b71-a4be-f13dee081e3a-client-ca podName:025ade16-8502-4b71-a4be-f13dee081e3a nodeName:}" failed. No retries permitted until 2026-03-18 09:54:43.533716038 +0000 UTC m=+167.492655933 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/025ade16-8502-4b71-a4be-f13dee081e3a-client-ca") pod "controller-manager-f5df8899c-zvdvg" (UID: "025ade16-8502-4b71-a4be-f13dee081e3a") : configmap "client-ca" not found Mar 18 09:54:43.033752 master-0 kubenswrapper[3991]: E0318 09:54:43.033677 3991 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 09:54:43.033752 master-0 kubenswrapper[3991]: E0318 09:54:43.033756 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/025ade16-8502-4b71-a4be-f13dee081e3a-serving-cert podName:025ade16-8502-4b71-a4be-f13dee081e3a nodeName:}" failed. No retries permitted until 2026-03-18 09:54:43.533750749 +0000 UTC m=+167.492690644 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/025ade16-8502-4b71-a4be-f13dee081e3a-serving-cert") pod "controller-manager-f5df8899c-zvdvg" (UID: "025ade16-8502-4b71-a4be-f13dee081e3a") : secret "serving-cert" not found Mar 18 09:54:43.034169 master-0 kubenswrapper[3991]: I0318 09:54:43.033777 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/025ade16-8502-4b71-a4be-f13dee081e3a-config\") pod \"controller-manager-f5df8899c-zvdvg\" (UID: \"025ade16-8502-4b71-a4be-f13dee081e3a\") " pod="openshift-controller-manager/controller-manager-f5df8899c-zvdvg" Mar 18 09:54:43.034169 master-0 kubenswrapper[3991]: E0318 09:54:43.033855 3991 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Mar 18 09:54:43.034169 master-0 kubenswrapper[3991]: E0318 09:54:43.033882 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/025ade16-8502-4b71-a4be-f13dee081e3a-config podName:025ade16-8502-4b71-a4be-f13dee081e3a nodeName:}" failed. 
No retries permitted until 2026-03-18 09:54:43.533873572 +0000 UTC m=+167.492813467 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/025ade16-8502-4b71-a4be-f13dee081e3a-config") pod "controller-manager-f5df8899c-zvdvg" (UID: "025ade16-8502-4b71-a4be-f13dee081e3a") : configmap "config" not found Mar 18 09:54:43.034169 master-0 kubenswrapper[3991]: I0318 09:54:43.033920 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l4b6\" (UniqueName: \"kubernetes.io/projected/025ade16-8502-4b71-a4be-f13dee081e3a-kube-api-access-8l4b6\") pod \"controller-manager-f5df8899c-zvdvg\" (UID: \"025ade16-8502-4b71-a4be-f13dee081e3a\") " pod="openshift-controller-manager/controller-manager-f5df8899c-zvdvg" Mar 18 09:54:43.053978 master-0 kubenswrapper[3991]: I0318 09:54:43.053945 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l4b6\" (UniqueName: \"kubernetes.io/projected/025ade16-8502-4b71-a4be-f13dee081e3a-kube-api-access-8l4b6\") pod \"controller-manager-f5df8899c-zvdvg\" (UID: \"025ade16-8502-4b71-a4be-f13dee081e3a\") " pod="openshift-controller-manager/controller-manager-f5df8899c-zvdvg" Mar 18 09:54:43.061236 master-0 kubenswrapper[3991]: I0318 09:54:43.061202 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-2l6cq"] Mar 18 09:54:43.066968 master-0 kubenswrapper[3991]: W0318 09:54:43.066920 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod932a70df_3afe_4873_9449_ab6e061d3fe3.slice/crio-dd0e307b59dcdef36339f9469bcea9ae60dc835b43a1e8b7190883e66520e662 WatchSource:0}: Error finding container dd0e307b59dcdef36339f9469bcea9ae60dc835b43a1e8b7190883e66520e662: Status 404 returned error can't find the container with id 
dd0e307b59dcdef36339f9469bcea9ae60dc835b43a1e8b7190883e66520e662 Mar 18 09:54:43.227294 master-0 kubenswrapper[3991]: E0318 09:54:43.227161 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[serving-cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr" podUID="15f8941b-dba2-40ba-86d5-3318f5b635cc" Mar 18 09:54:43.407903 master-0 systemd[1]: Stopping Kubernetes Kubelet... Mar 18 09:54:43.441489 master-0 systemd[1]: kubelet.service: Deactivated successfully. Mar 18 09:54:43.441738 master-0 systemd[1]: Stopped Kubernetes Kubelet. Mar 18 09:54:43.443105 master-0 systemd[1]: kubelet.service: Consumed 12.145s CPU time. Mar 18 09:54:43.452842 master-0 systemd[1]: Starting Kubernetes Kubelet... Mar 18 09:54:43.586765 master-0 kubenswrapper[8244]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 09:54:43.586765 master-0 kubenswrapper[8244]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 18 09:54:43.586765 master-0 kubenswrapper[8244]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 09:54:43.586765 master-0 kubenswrapper[8244]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 18 09:54:43.586765 master-0 kubenswrapper[8244]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 18 09:54:43.586765 master-0 kubenswrapper[8244]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 09:54:43.588028 master-0 kubenswrapper[8244]: I0318 09:54:43.586878 8244 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 18 09:54:43.591985 master-0 kubenswrapper[8244]: W0318 09:54:43.591951 8244 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 18 09:54:43.591985 master-0 kubenswrapper[8244]: W0318 09:54:43.591977 8244 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 18 09:54:43.591985 master-0 kubenswrapper[8244]: W0318 09:54:43.591983 8244 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 18 09:54:43.591985 master-0 kubenswrapper[8244]: W0318 09:54:43.591989 8244 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 18 09:54:43.591985 master-0 kubenswrapper[8244]: W0318 09:54:43.591994 8244 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 18 09:54:43.594756 master-0 kubenswrapper[8244]: W0318 09:54:43.591999 8244 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 18 09:54:43.594756 master-0 kubenswrapper[8244]: W0318 09:54:43.592573 8244 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 18 09:54:43.594756 master-0 kubenswrapper[8244]: W0318 09:54:43.592612 8244 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 18 09:54:43.594756 master-0 kubenswrapper[8244]: W0318 09:54:43.592620 8244 feature_gate.go:330] 
unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 18 09:54:43.594756 master-0 kubenswrapper[8244]: W0318 09:54:43.592627 8244 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 18 09:54:43.594756 master-0 kubenswrapper[8244]: W0318 09:54:43.592634 8244 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 18 09:54:43.594756 master-0 kubenswrapper[8244]: W0318 09:54:43.592641 8244 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 18 09:54:43.594756 master-0 kubenswrapper[8244]: W0318 09:54:43.592707 8244 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 18 09:54:43.594756 master-0 kubenswrapper[8244]: W0318 09:54:43.592716 8244 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 18 09:54:43.594756 master-0 kubenswrapper[8244]: W0318 09:54:43.592722 8244 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 18 09:54:43.594756 master-0 kubenswrapper[8244]: W0318 09:54:43.592729 8244 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 18 09:54:43.594756 master-0 kubenswrapper[8244]: W0318 09:54:43.592735 8244 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 18 09:54:43.594756 master-0 kubenswrapper[8244]: W0318 09:54:43.592741 8244 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 18 09:54:43.594756 master-0 kubenswrapper[8244]: W0318 09:54:43.592746 8244 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 18 09:54:43.594756 master-0 kubenswrapper[8244]: W0318 09:54:43.592776 8244 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 18 09:54:43.594756 master-0 kubenswrapper[8244]: W0318 09:54:43.592782 8244 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 18 09:54:43.594756 master-0 kubenswrapper[8244]: W0318 09:54:43.592790 8244 feature_gate.go:330] unrecognized 
feature gate: OnClusterBuild Mar 18 09:54:43.594756 master-0 kubenswrapper[8244]: W0318 09:54:43.592797 8244 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 18 09:54:43.594756 master-0 kubenswrapper[8244]: W0318 09:54:43.592806 8244 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 18 09:54:43.594756 master-0 kubenswrapper[8244]: W0318 09:54:43.592814 8244 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 18 09:54:43.598767 master-0 kubenswrapper[8244]: W0318 09:54:43.592866 8244 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 18 09:54:43.598767 master-0 kubenswrapper[8244]: W0318 09:54:43.592877 8244 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 18 09:54:43.598767 master-0 kubenswrapper[8244]: W0318 09:54:43.592885 8244 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 18 09:54:43.598767 master-0 kubenswrapper[8244]: W0318 09:54:43.592892 8244 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 18 09:54:43.598767 master-0 kubenswrapper[8244]: W0318 09:54:43.592897 8244 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 18 09:54:43.598767 master-0 kubenswrapper[8244]: W0318 09:54:43.592904 8244 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 18 09:54:43.598767 master-0 kubenswrapper[8244]: W0318 09:54:43.592940 8244 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 18 09:54:43.598767 master-0 kubenswrapper[8244]: W0318 09:54:43.592949 8244 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 18 09:54:43.598767 master-0 kubenswrapper[8244]: W0318 09:54:43.592956 8244 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 18 09:54:43.598767 master-0 kubenswrapper[8244]: W0318 09:54:43.592963 8244 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 18 09:54:43.598767 master-0 kubenswrapper[8244]: W0318 09:54:43.592969 8244 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 18 09:54:43.598767 master-0 kubenswrapper[8244]: W0318 09:54:43.592975 8244 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 18 09:54:43.598767 master-0 kubenswrapper[8244]: W0318 09:54:43.592985 8244 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 18 09:54:43.598767 master-0 kubenswrapper[8244]: W0318 09:54:43.592992 8244 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 18 09:54:43.598767 master-0 kubenswrapper[8244]: W0318 09:54:43.593026 8244 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 18 09:54:43.598767 master-0 kubenswrapper[8244]: W0318 09:54:43.593037 8244 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 18 09:54:43.598767 master-0 kubenswrapper[8244]: W0318 09:54:43.593043 8244 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 18 09:54:43.598767 master-0 kubenswrapper[8244]: W0318 09:54:43.593051 8244 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 18 09:54:43.598767 master-0 kubenswrapper[8244]: W0318 09:54:43.593057 8244 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 18 09:54:43.599513 master-0 kubenswrapper[8244]: W0318 09:54:43.593065 8244 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 18 09:54:43.599513 master-0 
kubenswrapper[8244]: W0318 09:54:43.593076 8244 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 18 09:54:43.599513 master-0 kubenswrapper[8244]: W0318 09:54:43.593120 8244 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 18 09:54:43.599513 master-0 kubenswrapper[8244]: W0318 09:54:43.593128 8244 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 18 09:54:43.599513 master-0 kubenswrapper[8244]: W0318 09:54:43.593139 8244 feature_gate.go:330] unrecognized feature gate: Example Mar 18 09:54:43.599513 master-0 kubenswrapper[8244]: W0318 09:54:43.593145 8244 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 18 09:54:43.599513 master-0 kubenswrapper[8244]: W0318 09:54:43.593151 8244 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 18 09:54:43.599513 master-0 kubenswrapper[8244]: W0318 09:54:43.593157 8244 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 18 09:54:43.599513 master-0 kubenswrapper[8244]: W0318 09:54:43.593191 8244 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 18 09:54:43.599513 master-0 kubenswrapper[8244]: W0318 09:54:43.593198 8244 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 18 09:54:43.599513 master-0 kubenswrapper[8244]: W0318 09:54:43.593203 8244 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 18 09:54:43.599513 master-0 kubenswrapper[8244]: W0318 09:54:43.593214 8244 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 18 09:54:43.599513 master-0 kubenswrapper[8244]: W0318 09:54:43.593222 8244 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 18 09:54:43.599513 master-0 kubenswrapper[8244]: W0318 09:54:43.593229 8244 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 09:54:43.599513 master-0 kubenswrapper[8244]: W0318 09:54:43.593235 8244 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 09:54:43.599513 master-0 kubenswrapper[8244]: W0318 09:54:43.593304 8244 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 09:54:43.599513 master-0 kubenswrapper[8244]: W0318 09:54:43.593312 8244 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 09:54:43.599513 master-0 kubenswrapper[8244]: W0318 09:54:43.593318 8244 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 09:54:43.599513 master-0 kubenswrapper[8244]: W0318 09:54:43.593324 8244 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 09:54:43.599513 master-0 kubenswrapper[8244]: W0318 09:54:43.593329 8244 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 09:54:43.600454 master-0 kubenswrapper[8244]: W0318 09:54:43.593335 8244 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 09:54:43.600454 master-0 kubenswrapper[8244]: W0318 09:54:43.593340 8244 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 09:54:43.600454 master-0 kubenswrapper[8244]: W0318 09:54:43.593346 8244 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 09:54:43.600454 master-0 kubenswrapper[8244]: W0318 09:54:43.593352 8244 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 09:54:43.600454 master-0 kubenswrapper[8244]: W0318 09:54:43.593357 8244 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 09:54:43.600454 master-0 kubenswrapper[8244]: W0318 09:54:43.593362 8244 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 09:54:43.600454 master-0 kubenswrapper[8244]: W0318 09:54:43.593368 8244 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 09:54:43.600454 master-0 kubenswrapper[8244]: W0318 09:54:43.593374 8244 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 09:54:43.600454 master-0 kubenswrapper[8244]: I0318 09:54:43.593576 8244 flags.go:64] FLAG: --address="0.0.0.0"
Mar 18 09:54:43.600454 master-0 kubenswrapper[8244]: I0318 09:54:43.593591 8244 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 18 09:54:43.600454 master-0 kubenswrapper[8244]: I0318 09:54:43.593604 8244 flags.go:64] FLAG: --anonymous-auth="true"
Mar 18 09:54:43.600454 master-0 kubenswrapper[8244]: I0318 09:54:43.593613 8244 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 18 09:54:43.600454 master-0 kubenswrapper[8244]: I0318 09:54:43.593636 8244 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 18 09:54:43.600454 master-0 kubenswrapper[8244]: I0318 09:54:43.593643 8244 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 18 09:54:43.600454 master-0 kubenswrapper[8244]: I0318 09:54:43.593652 8244 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 18 09:54:43.600454 master-0 kubenswrapper[8244]: I0318 09:54:43.593663 8244 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 18 09:54:43.600454 master-0 kubenswrapper[8244]: I0318 09:54:43.593671 8244 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 18 09:54:43.600454 master-0 kubenswrapper[8244]: I0318 09:54:43.593680 8244 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 18 09:54:43.600454 master-0 kubenswrapper[8244]: I0318 09:54:43.593690 8244 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 18 09:54:43.600454 master-0 kubenswrapper[8244]: I0318 09:54:43.593699 8244 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 18 09:54:43.600454 master-0 kubenswrapper[8244]: I0318 09:54:43.593710 8244 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 18 09:54:43.600454 master-0 kubenswrapper[8244]: I0318 09:54:43.593717 8244 flags.go:64] FLAG: --cgroup-root=""
Mar 18 09:54:43.601286 master-0 kubenswrapper[8244]: I0318 09:54:43.593724 8244 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 18 09:54:43.601286 master-0 kubenswrapper[8244]: I0318 09:54:43.593875 8244 flags.go:64] FLAG: --client-ca-file=""
Mar 18 09:54:43.601286 master-0 kubenswrapper[8244]: I0318 09:54:43.593892 8244 flags.go:64] FLAG: --cloud-config=""
Mar 18 09:54:43.601286 master-0 kubenswrapper[8244]: I0318 09:54:43.593900 8244 flags.go:64] FLAG: --cloud-provider=""
Mar 18 09:54:43.601286 master-0 kubenswrapper[8244]: I0318 09:54:43.593908 8244 flags.go:64] FLAG: --cluster-dns="[]"
Mar 18 09:54:43.601286 master-0 kubenswrapper[8244]: I0318 09:54:43.593970 8244 flags.go:64] FLAG: --cluster-domain=""
Mar 18 09:54:43.601286 master-0 kubenswrapper[8244]: I0318 09:54:43.593979 8244 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 18 09:54:43.601286 master-0 kubenswrapper[8244]: I0318 09:54:43.593994 8244 flags.go:64] FLAG: --config-dir=""
Mar 18 09:54:43.601286 master-0 kubenswrapper[8244]: I0318 09:54:43.594014 8244 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 18 09:54:43.601286 master-0 kubenswrapper[8244]: I0318 09:54:43.594023 8244 flags.go:64] FLAG: --container-log-max-files="5"
Mar 18 09:54:43.601286 master-0 kubenswrapper[8244]: I0318 09:54:43.594035 8244 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 18 09:54:43.601286 master-0 kubenswrapper[8244]: I0318 09:54:43.594043 8244 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 18 09:54:43.601286 master-0 kubenswrapper[8244]: I0318 09:54:43.594052 8244 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 18 09:54:43.601286 master-0 kubenswrapper[8244]: I0318 09:54:43.594061 8244 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 18 09:54:43.601286 master-0 kubenswrapper[8244]: I0318 09:54:43.594069 8244 flags.go:64] FLAG: --contention-profiling="false"
Mar 18 09:54:43.601286 master-0 kubenswrapper[8244]: I0318 09:54:43.594079 8244 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 18 09:54:43.601286 master-0 kubenswrapper[8244]: I0318 09:54:43.594085 8244 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 18 09:54:43.601286 master-0 kubenswrapper[8244]: I0318 09:54:43.594092 8244 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 18 09:54:43.601286 master-0 kubenswrapper[8244]: I0318 09:54:43.594099 8244 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 18 09:54:43.601286 master-0 kubenswrapper[8244]: I0318 09:54:43.594108 8244 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 18 09:54:43.601286 master-0 kubenswrapper[8244]: I0318 09:54:43.594114 8244 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 18 09:54:43.601286 master-0 kubenswrapper[8244]: I0318 09:54:43.594121 8244 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 18 09:54:43.601286 master-0 kubenswrapper[8244]: I0318 09:54:43.594127 8244 flags.go:64] FLAG: --enable-load-reader="false"
Mar 18 09:54:43.601286 master-0 kubenswrapper[8244]: I0318 09:54:43.594133 8244 flags.go:64] FLAG: --enable-server="true"
Mar 18 09:54:43.601286 master-0 kubenswrapper[8244]: I0318 09:54:43.594155 8244 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 18 09:54:43.603904 master-0 kubenswrapper[8244]: I0318 09:54:43.594167 8244 flags.go:64] FLAG: --event-burst="100"
Mar 18 09:54:43.603904 master-0 kubenswrapper[8244]: I0318 09:54:43.594176 8244 flags.go:64] FLAG: --event-qps="50"
Mar 18 09:54:43.603904 master-0 kubenswrapper[8244]: I0318 09:54:43.594183 8244 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 18 09:54:43.603904 master-0 kubenswrapper[8244]: I0318 09:54:43.594191 8244 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 18 09:54:43.603904 master-0 kubenswrapper[8244]: I0318 09:54:43.594200 8244 flags.go:64] FLAG: --eviction-hard=""
Mar 18 09:54:43.603904 master-0 kubenswrapper[8244]: I0318 09:54:43.594211 8244 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 18 09:54:43.603904 master-0 kubenswrapper[8244]: I0318 09:54:43.594219 8244 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 18 09:54:43.603904 master-0 kubenswrapper[8244]: I0318 09:54:43.594232 8244 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 18 09:54:43.603904 master-0 kubenswrapper[8244]: I0318 09:54:43.594242 8244 flags.go:64] FLAG: --eviction-soft=""
Mar 18 09:54:43.603904 master-0 kubenswrapper[8244]: I0318 09:54:43.594251 8244 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 18 09:54:43.603904 master-0 kubenswrapper[8244]: I0318 09:54:43.594260 8244 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 18 09:54:43.603904 master-0 kubenswrapper[8244]: I0318 09:54:43.594268 8244 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 18 09:54:43.603904 master-0 kubenswrapper[8244]: I0318 09:54:43.594277 8244 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 18 09:54:43.603904 master-0 kubenswrapper[8244]: I0318 09:54:43.594285 8244 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 18 09:54:43.603904 master-0 kubenswrapper[8244]: I0318 09:54:43.594292 8244 flags.go:64] FLAG: --fail-swap-on="true"
Mar 18 09:54:43.603904 master-0 kubenswrapper[8244]: I0318 09:54:43.594302 8244 flags.go:64] FLAG: --feature-gates=""
Mar 18 09:54:43.603904 master-0 kubenswrapper[8244]: I0318 09:54:43.594315 8244 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 18 09:54:43.603904 master-0 kubenswrapper[8244]: I0318 09:54:43.594322 8244 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 18 09:54:43.603904 master-0 kubenswrapper[8244]: I0318 09:54:43.594329 8244 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 18 09:54:43.603904 master-0 kubenswrapper[8244]: I0318 09:54:43.594337 8244 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 18 09:54:43.603904 master-0 kubenswrapper[8244]: I0318 09:54:43.594344 8244 flags.go:64] FLAG: --healthz-port="10248"
Mar 18 09:54:43.603904 master-0 kubenswrapper[8244]: I0318 09:54:43.594352 8244 flags.go:64] FLAG: --help="false"
Mar 18 09:54:43.603904 master-0 kubenswrapper[8244]: I0318 09:54:43.594359 8244 flags.go:64] FLAG: --hostname-override=""
Mar 18 09:54:43.603904 master-0 kubenswrapper[8244]: I0318 09:54:43.594365 8244 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 18 09:54:43.603904 master-0 kubenswrapper[8244]: I0318 09:54:43.594375 8244 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 18 09:54:43.603904 master-0 kubenswrapper[8244]: I0318 09:54:43.594382 8244 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 18 09:54:43.604988 master-0 kubenswrapper[8244]: I0318 09:54:43.595221 8244 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 18 09:54:43.604988 master-0 kubenswrapper[8244]: I0318 09:54:43.595244 8244 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 18 09:54:43.604988 master-0 kubenswrapper[8244]: I0318 09:54:43.595253 8244 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 18 09:54:43.604988 master-0 kubenswrapper[8244]: I0318 09:54:43.595259 8244 flags.go:64] FLAG: --image-service-endpoint=""
Mar 18 09:54:43.604988 master-0 kubenswrapper[8244]: I0318 09:54:43.595266 8244 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 18 09:54:43.604988 master-0 kubenswrapper[8244]: I0318 09:54:43.595284 8244 flags.go:64] FLAG: --kube-api-burst="100"
Mar 18 09:54:43.604988 master-0 kubenswrapper[8244]: I0318 09:54:43.595292 8244 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 18 09:54:43.604988 master-0 kubenswrapper[8244]: I0318 09:54:43.595304 8244 flags.go:64] FLAG: --kube-api-qps="50"
Mar 18 09:54:43.604988 master-0 kubenswrapper[8244]: I0318 09:54:43.595312 8244 flags.go:64] FLAG: --kube-reserved=""
Mar 18 09:54:43.604988 master-0 kubenswrapper[8244]: I0318 09:54:43.595321 8244 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 18 09:54:43.604988 master-0 kubenswrapper[8244]: I0318 09:54:43.595328 8244 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 18 09:54:43.604988 master-0 kubenswrapper[8244]: I0318 09:54:43.595337 8244 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 18 09:54:43.604988 master-0 kubenswrapper[8244]: I0318 09:54:43.595344 8244 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 18 09:54:43.604988 master-0 kubenswrapper[8244]: I0318 09:54:43.595353 8244 flags.go:64] FLAG: --lock-file=""
Mar 18 09:54:43.604988 master-0 kubenswrapper[8244]: I0318 09:54:43.595360 8244 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 18 09:54:43.604988 master-0 kubenswrapper[8244]: I0318 09:54:43.595373 8244 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 18 09:54:43.604988 master-0 kubenswrapper[8244]: I0318 09:54:43.595382 8244 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 18 09:54:43.604988 master-0 kubenswrapper[8244]: I0318 09:54:43.595575 8244 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 18 09:54:43.604988 master-0 kubenswrapper[8244]: I0318 09:54:43.595583 8244 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 18 09:54:43.604988 master-0 kubenswrapper[8244]: I0318 09:54:43.595590 8244 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 18 09:54:43.604988 master-0 kubenswrapper[8244]: I0318 09:54:43.595596 8244 flags.go:64] FLAG: --logging-format="text"
Mar 18 09:54:43.604988 master-0 kubenswrapper[8244]: I0318 09:54:43.595613 8244 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 18 09:54:43.604988 master-0 kubenswrapper[8244]: I0318 09:54:43.595856 8244 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 18 09:54:43.604988 master-0 kubenswrapper[8244]: I0318 09:54:43.595867 8244 flags.go:64] FLAG: --manifest-url=""
Mar 18 09:54:43.604988 master-0 kubenswrapper[8244]: I0318 09:54:43.595895 8244 flags.go:64] FLAG: --manifest-url-header=""
Mar 18 09:54:43.606198 master-0 kubenswrapper[8244]: I0318 09:54:43.595909 8244 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 18 09:54:43.606198 master-0 kubenswrapper[8244]: I0318 09:54:43.595918 8244 flags.go:64] FLAG: --max-open-files="1000000"
Mar 18 09:54:43.606198 master-0 kubenswrapper[8244]: I0318 09:54:43.595928 8244 flags.go:64] FLAG: --max-pods="110"
Mar 18 09:54:43.606198 master-0 kubenswrapper[8244]: I0318 09:54:43.595936 8244 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 18 09:54:43.606198 master-0 kubenswrapper[8244]: I0318 09:54:43.595948 8244 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 18 09:54:43.606198 master-0 kubenswrapper[8244]: I0318 09:54:43.595956 8244 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 18 09:54:43.606198 master-0 kubenswrapper[8244]: I0318 09:54:43.595965 8244 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 18 09:54:43.606198 master-0 kubenswrapper[8244]: I0318 09:54:43.595991 8244 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 18 09:54:43.606198 master-0 kubenswrapper[8244]: I0318 09:54:43.595999 8244 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 18 09:54:43.606198 master-0 kubenswrapper[8244]: I0318 09:54:43.596007 8244 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 18 09:54:43.606198 master-0 kubenswrapper[8244]: I0318 09:54:43.596032 8244 flags.go:64] FLAG: --node-status-max-images="50"
Mar 18 09:54:43.606198 master-0 kubenswrapper[8244]: I0318 09:54:43.596040 8244 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 18 09:54:43.606198 master-0 kubenswrapper[8244]: I0318 09:54:43.596049 8244 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 18 09:54:43.606198 master-0 kubenswrapper[8244]: I0318 09:54:43.596064 8244 flags.go:64] FLAG: --pod-cidr=""
Mar 18 09:54:43.606198 master-0 kubenswrapper[8244]: I0318 09:54:43.596072 8244 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422"
Mar 18 09:54:43.606198 master-0 kubenswrapper[8244]: I0318 09:54:43.596088 8244 flags.go:64] FLAG: --pod-manifest-path=""
Mar 18 09:54:43.606198 master-0 kubenswrapper[8244]: I0318 09:54:43.596096 8244 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 18 09:54:43.606198 master-0 kubenswrapper[8244]: I0318 09:54:43.596108 8244 flags.go:64] FLAG: --pods-per-core="0"
Mar 18 09:54:43.606198 master-0 kubenswrapper[8244]: I0318 09:54:43.596115 8244 flags.go:64] FLAG: --port="10250"
Mar 18 09:54:43.606198 master-0 kubenswrapper[8244]: I0318 09:54:43.596121 8244 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 18 09:54:43.606198 master-0 kubenswrapper[8244]: I0318 09:54:43.596128 8244 flags.go:64] FLAG: --provider-id=""
Mar 18 09:54:43.606198 master-0 kubenswrapper[8244]: I0318 09:54:43.596135 8244 flags.go:64] FLAG: --qos-reserved=""
Mar 18 09:54:43.606198 master-0 kubenswrapper[8244]: I0318 09:54:43.596145 8244 flags.go:64] FLAG: --read-only-port="10255"
Mar 18 09:54:43.606198 master-0 kubenswrapper[8244]: I0318 09:54:43.596151 8244 flags.go:64] FLAG: --register-node="true"
Mar 18 09:54:43.607192 master-0 kubenswrapper[8244]: I0318 09:54:43.596158 8244 flags.go:64] FLAG: --register-schedulable="true"
Mar 18 09:54:43.607192 master-0 kubenswrapper[8244]: I0318 09:54:43.596164 8244 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 18 09:54:43.607192 master-0 kubenswrapper[8244]: I0318 09:54:43.596178 8244 flags.go:64] FLAG: --registry-burst="10"
Mar 18 09:54:43.607192 master-0 kubenswrapper[8244]: I0318 09:54:43.596185 8244 flags.go:64] FLAG: --registry-qps="5"
Mar 18 09:54:43.607192 master-0 kubenswrapper[8244]: I0318 09:54:43.596193 8244 flags.go:64] FLAG: --reserved-cpus=""
Mar 18 09:54:43.607192 master-0 kubenswrapper[8244]: I0318 09:54:43.596206 8244 flags.go:64] FLAG: --reserved-memory=""
Mar 18 09:54:43.607192 master-0 kubenswrapper[8244]: I0318 09:54:43.596217 8244 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 18 09:54:43.607192 master-0 kubenswrapper[8244]: I0318 09:54:43.596226 8244 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 18 09:54:43.607192 master-0 kubenswrapper[8244]: I0318 09:54:43.596234 8244 flags.go:64] FLAG: --rotate-certificates="false"
Mar 18 09:54:43.607192 master-0 kubenswrapper[8244]: I0318 09:54:43.596242 8244 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 18 09:54:43.607192 master-0 kubenswrapper[8244]: I0318 09:54:43.596251 8244 flags.go:64] FLAG: --runonce="false"
Mar 18 09:54:43.607192 master-0 kubenswrapper[8244]: I0318 09:54:43.596259 8244 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 18 09:54:43.607192 master-0 kubenswrapper[8244]: I0318 09:54:43.596271 8244 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 18 09:54:43.607192 master-0 kubenswrapper[8244]: I0318 09:54:43.596279 8244 flags.go:64] FLAG: --seccomp-default="false"
Mar 18 09:54:43.607192 master-0 kubenswrapper[8244]: I0318 09:54:43.596292 8244 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 18 09:54:43.607192 master-0 kubenswrapper[8244]: I0318 09:54:43.596300 8244 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 18 09:54:43.607192 master-0 kubenswrapper[8244]: I0318 09:54:43.596309 8244 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 18 09:54:43.607192 master-0 kubenswrapper[8244]: I0318 09:54:43.596316 8244 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 18 09:54:43.607192 master-0 kubenswrapper[8244]: I0318 09:54:43.596322 8244 flags.go:64] FLAG: --storage-driver-password="root"
Mar 18 09:54:43.607192 master-0 kubenswrapper[8244]: I0318 09:54:43.596329 8244 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 18 09:54:43.607192 master-0 kubenswrapper[8244]: I0318 09:54:43.596335 8244 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 18 09:54:43.607192 master-0 kubenswrapper[8244]: I0318 09:54:43.596342 8244 flags.go:64] FLAG: --storage-driver-user="root"
Mar 18 09:54:43.607192 master-0 kubenswrapper[8244]: I0318 09:54:43.596351 8244 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 18 09:54:43.607192 master-0 kubenswrapper[8244]: I0318 09:54:43.596358 8244 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 18 09:54:43.607192 master-0 kubenswrapper[8244]: I0318 09:54:43.596365 8244 flags.go:64] FLAG: --system-cgroups=""
Mar 18 09:54:43.608211 master-0 kubenswrapper[8244]: I0318 09:54:43.596374 8244 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 18 09:54:43.608211 master-0 kubenswrapper[8244]: I0318 09:54:43.596386 8244 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 18 09:54:43.608211 master-0 kubenswrapper[8244]: I0318 09:54:43.596392 8244 flags.go:64] FLAG: --tls-cert-file=""
Mar 18 09:54:43.608211 master-0 kubenswrapper[8244]: I0318 09:54:43.596398 8244 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 18 09:54:43.608211 master-0 kubenswrapper[8244]: I0318 09:54:43.596410 8244 flags.go:64] FLAG: --tls-min-version=""
Mar 18 09:54:43.608211 master-0 kubenswrapper[8244]: I0318 09:54:43.596418 8244 flags.go:64] FLAG: --tls-private-key-file=""
Mar 18 09:54:43.608211 master-0 kubenswrapper[8244]: I0318 09:54:43.596425 8244 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 18 09:54:43.608211 master-0 kubenswrapper[8244]: I0318 09:54:43.596432 8244 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 18 09:54:43.608211 master-0 kubenswrapper[8244]: I0318 09:54:43.596438 8244 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 18 09:54:43.608211 master-0 kubenswrapper[8244]: I0318 09:54:43.596444 8244 flags.go:64] FLAG: --v="2"
Mar 18 09:54:43.608211 master-0 kubenswrapper[8244]: I0318 09:54:43.596454 8244 flags.go:64] FLAG: --version="false"
Mar 18 09:54:43.608211 master-0 kubenswrapper[8244]: I0318 09:54:43.596462 8244 flags.go:64] FLAG: --vmodule=""
Mar 18 09:54:43.608211 master-0 kubenswrapper[8244]: I0318 09:54:43.596474 8244 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 18 09:54:43.608211 master-0 kubenswrapper[8244]: I0318 09:54:43.596481 8244 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 18 09:54:43.608211 master-0 kubenswrapper[8244]: W0318 09:54:43.596781 8244 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 09:54:43.608211 master-0 kubenswrapper[8244]: W0318 09:54:43.596791 8244 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 09:54:43.608211 master-0 kubenswrapper[8244]: W0318 09:54:43.596798 8244 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 09:54:43.608211 master-0 kubenswrapper[8244]: W0318 09:54:43.596805 8244 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 09:54:43.608211 master-0 kubenswrapper[8244]: W0318 09:54:43.596812 8244 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 09:54:43.608211 master-0 kubenswrapper[8244]: W0318 09:54:43.596817 8244 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 09:54:43.608211 master-0 kubenswrapper[8244]: W0318 09:54:43.596847 8244 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 09:54:43.608211 master-0 kubenswrapper[8244]: W0318 09:54:43.596852 8244 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 09:54:43.608211 master-0 kubenswrapper[8244]: W0318 09:54:43.596859 8244 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 09:54:43.609296 master-0 kubenswrapper[8244]: W0318 09:54:43.596864 8244 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 09:54:43.609296 master-0 kubenswrapper[8244]: W0318 09:54:43.596870 8244 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 09:54:43.609296 master-0 kubenswrapper[8244]: W0318 09:54:43.596875 8244 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 09:54:43.609296 master-0 kubenswrapper[8244]: W0318 09:54:43.596884 8244 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 09:54:43.609296 master-0 kubenswrapper[8244]: W0318 09:54:43.596892 8244 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 09:54:43.609296 master-0 kubenswrapper[8244]: W0318 09:54:43.596897 8244 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 09:54:43.609296 master-0 kubenswrapper[8244]: W0318 09:54:43.596903 8244 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 09:54:43.609296 master-0 kubenswrapper[8244]: W0318 09:54:43.596908 8244 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 09:54:43.609296 master-0 kubenswrapper[8244]: W0318 09:54:43.596913 8244 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 09:54:43.609296 master-0 kubenswrapper[8244]: W0318 09:54:43.596923 8244 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 09:54:43.609296 master-0 kubenswrapper[8244]: W0318 09:54:43.596928 8244 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 09:54:43.609296 master-0 kubenswrapper[8244]: W0318 09:54:43.596934 8244 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 09:54:43.609296 master-0 kubenswrapper[8244]: W0318 09:54:43.596939 8244 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 09:54:43.609296 master-0 kubenswrapper[8244]: W0318 09:54:43.596944 8244 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 09:54:43.609296 master-0 kubenswrapper[8244]: W0318 09:54:43.596952 8244 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 09:54:43.609296 master-0 kubenswrapper[8244]: W0318 09:54:43.596957 8244 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 09:54:43.609296 master-0 kubenswrapper[8244]: W0318 09:54:43.596966 8244 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 09:54:43.609296 master-0 kubenswrapper[8244]: W0318 09:54:43.596972 8244 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 09:54:43.609296 master-0 kubenswrapper[8244]: W0318 09:54:43.596977 8244 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 09:54:43.610214 master-0 kubenswrapper[8244]: W0318 09:54:43.596983 8244 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 09:54:43.610214 master-0 kubenswrapper[8244]: W0318 09:54:43.596988 8244 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 09:54:43.610214 master-0 kubenswrapper[8244]: W0318 09:54:43.596994 8244 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 09:54:43.610214 master-0 kubenswrapper[8244]: W0318 09:54:43.596999 8244 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 09:54:43.610214 master-0 kubenswrapper[8244]: W0318 09:54:43.597005 8244 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 09:54:43.610214 master-0 kubenswrapper[8244]: W0318 09:54:43.597010 8244 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 09:54:43.610214 master-0 kubenswrapper[8244]: W0318 09:54:43.597016 8244 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 09:54:43.610214 master-0 kubenswrapper[8244]: W0318 09:54:43.597021 8244 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 09:54:43.610214 master-0 kubenswrapper[8244]: W0318 09:54:43.597026 8244 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 09:54:43.610214 master-0 kubenswrapper[8244]: W0318 09:54:43.597035 8244 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 09:54:43.610214 master-0 kubenswrapper[8244]: W0318 09:54:43.597040 8244 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 09:54:43.610214 master-0 kubenswrapper[8244]: W0318 09:54:43.597046 8244 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 09:54:43.610214 master-0 kubenswrapper[8244]: W0318 09:54:43.597051 8244 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 09:54:43.610214 master-0 kubenswrapper[8244]: W0318 09:54:43.597058 8244 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 09:54:43.610214 master-0 kubenswrapper[8244]: W0318 09:54:43.597064 8244 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 09:54:43.610214 master-0 kubenswrapper[8244]: W0318 09:54:43.597071 8244 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 09:54:43.610214 master-0 kubenswrapper[8244]: W0318 09:54:43.597077 8244 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 09:54:43.610214 master-0 kubenswrapper[8244]: W0318 09:54:43.597083 8244 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 09:54:43.610214 master-0 kubenswrapper[8244]: W0318 09:54:43.597089 8244 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 09:54:43.610214 master-0 kubenswrapper[8244]: W0318 09:54:43.597094 8244 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 09:54:43.611080 master-0 kubenswrapper[8244]: W0318 09:54:43.597100 8244 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 09:54:43.611080 master-0 kubenswrapper[8244]: W0318 09:54:43.597112 8244 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 09:54:43.611080 master-0 kubenswrapper[8244]: W0318 09:54:43.597124 8244 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 09:54:43.611080 master-0 kubenswrapper[8244]: W0318 09:54:43.597130 8244 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 09:54:43.611080 master-0 kubenswrapper[8244]: W0318 09:54:43.597135 8244 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 09:54:43.611080 master-0 kubenswrapper[8244]: W0318 09:54:43.597142 8244 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 09:54:43.611080 master-0 kubenswrapper[8244]: W0318 09:54:43.597149 8244 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 09:54:43.611080 master-0 kubenswrapper[8244]: W0318 09:54:43.597157 8244 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 09:54:43.611080 master-0 kubenswrapper[8244]: W0318 09:54:43.597163 8244 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 09:54:43.611080 master-0 kubenswrapper[8244]: W0318 09:54:43.597169 8244 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 09:54:43.611080 master-0 kubenswrapper[8244]: W0318 09:54:43.597176 8244 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 09:54:43.611080 master-0 kubenswrapper[8244]: W0318 09:54:43.597183 8244 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 09:54:43.611080 master-0 kubenswrapper[8244]: W0318 09:54:43.597189 8244 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 09:54:43.611080 master-0 kubenswrapper[8244]: W0318 09:54:43.597195 8244 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 09:54:43.611080 master-0 kubenswrapper[8244]: W0318 09:54:43.597204 8244 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 09:54:43.611080 master-0 kubenswrapper[8244]: W0318 09:54:43.597209 8244 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 09:54:43.611080 master-0 kubenswrapper[8244]: W0318 09:54:43.597214 8244 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 09:54:43.611080 master-0 kubenswrapper[8244]: W0318 09:54:43.597220 8244 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 09:54:43.611080 master-0 kubenswrapper[8244]: W0318 09:54:43.597225 8244 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 09:54:43.611856 master-0 kubenswrapper[8244]: W0318 09:54:43.597231 8244 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 09:54:43.611856 master-0 kubenswrapper[8244]: W0318 09:54:43.597236 8244 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 09:54:43.611856 master-0 kubenswrapper[8244]: W0318 09:54:43.597241 8244 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 09:54:43.611856 master-0 kubenswrapper[8244]: W0318 09:54:43.597247 8244 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 09:54:43.611856 master-0 kubenswrapper[8244]: W0318 09:54:43.597252 8244 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 09:54:43.611856 master-0 kubenswrapper[8244]: I0318 09:54:43.597271 8244 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 09:54:43.611856 master-0 kubenswrapper[8244]: I0318 09:54:43.608156 8244 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 18 09:54:43.611856 master-0 kubenswrapper[8244]: I0318 09:54:43.608252 8244 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 18 09:54:43.611856 master-0 kubenswrapper[8244]: W0318 09:54:43.608497 8244 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 09:54:43.611856 master-0 kubenswrapper[8244]: W0318 09:54:43.608510 8244 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 09:54:43.611856 master-0 kubenswrapper[8244]: W0318 09:54:43.608515 8244 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 09:54:43.611856 master-0 kubenswrapper[8244]: W0318 09:54:43.608521 8244 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 09:54:43.611856 master-0 kubenswrapper[8244]: W0318 09:54:43.608527 8244 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 09:54:43.611856 master-0 kubenswrapper[8244]: W0318 09:54:43.608533 8244 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 09:54:43.611856 master-0 kubenswrapper[8244]: W0318 09:54:43.608539 8244 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 09:54:43.612559 master-0 kubenswrapper[8244]: W0318 09:54:43.608583 8244 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 09:54:43.612559 master-0 kubenswrapper[8244]: W0318 09:54:43.608590 8244 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 09:54:43.612559 master-0 kubenswrapper[8244]: W0318 09:54:43.608600 8244 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 09:54:43.612559 master-0 kubenswrapper[8244]: W0318 09:54:43.608607 8244 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 09:54:43.612559 master-0 kubenswrapper[8244]: W0318 09:54:43.608616 8244 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 09:54:43.612559 master-0 kubenswrapper[8244]: W0318 09:54:43.608621 8244 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 09:54:43.612559 master-0 kubenswrapper[8244]: W0318 09:54:43.608626 8244 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 09:54:43.612559 master-0 kubenswrapper[8244]: W0318 09:54:43.608631 8244 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 09:54:43.612559 master-0 kubenswrapper[8244]: W0318 09:54:43.608637 8244 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 09:54:43.612559 master-0 kubenswrapper[8244]: W0318 09:54:43.608641 8244 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 09:54:43.612559 master-0 kubenswrapper[8244]: W0318 09:54:43.608646 8244 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 09:54:43.612559 master-0 kubenswrapper[8244]: W0318 09:54:43.608651 8244 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 09:54:43.612559 master-0 kubenswrapper[8244]: W0318 09:54:43.608656 8244 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 09:54:43.612559 master-0 kubenswrapper[8244]: W0318 09:54:43.608661 8244 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 09:54:43.612559 master-0 kubenswrapper[8244]: W0318 09:54:43.608672 8244 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 09:54:43.612559 master-0 kubenswrapper[8244]: W0318 09:54:43.608715 8244 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 09:54:43.612559 master-0 kubenswrapper[8244]: W0318 09:54:43.608721 8244 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 09:54:43.612559 master-0 kubenswrapper[8244]: W0318 09:54:43.608726 8244 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 09:54:43.612559 master-0 kubenswrapper[8244]: W0318 09:54:43.608733 8244 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 09:54:43.613324 master-0 kubenswrapper[8244]: W0318 09:54:43.608739 8244 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 09:54:43.613324 master-0 kubenswrapper[8244]: W0318 09:54:43.608753 8244 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 09:54:43.613324 master-0 kubenswrapper[8244]: W0318 09:54:43.608760 8244 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 09:54:43.613324 master-0 kubenswrapper[8244]: W0318 09:54:43.608801 8244 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 09:54:43.613324 master-0 kubenswrapper[8244]: W0318 09:54:43.608807 8244 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 09:54:43.613324 master-0 kubenswrapper[8244]: W0318 09:54:43.608814 8244 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 09:54:43.613324 master-0 kubenswrapper[8244]: W0318 09:54:43.608862 8244 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 09:54:43.613324 master-0 kubenswrapper[8244]: W0318 09:54:43.608901 8244 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 09:54:43.613324 master-0 kubenswrapper[8244]: W0318 09:54:43.608926 8244 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 09:54:43.613324 master-0 kubenswrapper[8244]: W0318 09:54:43.608933 8244 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 09:54:43.613324 master-0 kubenswrapper[8244]: W0318 09:54:43.608937 8244 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 09:54:43.613324 master-0 kubenswrapper[8244]: W0318 09:54:43.608942 8244 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 09:54:43.613324 master-0 kubenswrapper[8244]: W0318 09:54:43.608947 8244 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 09:54:43.613324 master-0 kubenswrapper[8244]: W0318 09:54:43.608953 8244 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 09:54:43.613324 master-0 kubenswrapper[8244]: W0318 09:54:43.608958 8244 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 09:54:43.613324 master-0 kubenswrapper[8244]: W0318 09:54:43.608962 8244 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 09:54:43.613324 master-0 kubenswrapper[8244]: W0318 09:54:43.608967 8244 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 09:54:43.613324 master-0 kubenswrapper[8244]: W0318 09:54:43.609011 8244 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 09:54:43.613324 master-0 kubenswrapper[8244]: W0318 09:54:43.609019 8244 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 09:54:43.613324 master-0 kubenswrapper[8244]: W0318 09:54:43.609024 8244 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 09:54:43.614336 master-0 kubenswrapper[8244]: W0318 09:54:43.609035 8244 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 09:54:43.614336 master-0 kubenswrapper[8244]: W0318 09:54:43.609041 8244 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 09:54:43.614336 master-0 kubenswrapper[8244]: W0318 09:54:43.609046 8244 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 09:54:43.614336 master-0 kubenswrapper[8244]: W0318 09:54:43.609050 8244 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 09:54:43.614336 master-0 kubenswrapper[8244]: W0318 09:54:43.609055 8244 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 09:54:43.614336 master-0 kubenswrapper[8244]: W0318 09:54:43.609060 8244 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 09:54:43.614336 master-0 kubenswrapper[8244]: W0318 09:54:43.609065 8244 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 09:54:43.614336 master-0 kubenswrapper[8244]: W0318 09:54:43.609070 8244 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 09:54:43.614336 master-0 kubenswrapper[8244]: W0318 09:54:43.609124 8244 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 09:54:43.614336 master-0 kubenswrapper[8244]: W0318 09:54:43.609130 8244 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 09:54:43.614336 master-0 kubenswrapper[8244]: W0318 09:54:43.609135 8244 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 09:54:43.614336 master-0 kubenswrapper[8244]: W0318 09:54:43.609140 8244 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 09:54:43.614336 master-0 kubenswrapper[8244]: W0318 09:54:43.609145 8244 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 09:54:43.614336 master-0 kubenswrapper[8244]: W0318 09:54:43.609168 8244 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 09:54:43.614336 master-0 kubenswrapper[8244]: W0318 09:54:43.609177 8244 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 09:54:43.614336 master-0 kubenswrapper[8244]: W0318 09:54:43.609185 8244 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 09:54:43.614336 master-0 kubenswrapper[8244]: W0318 09:54:43.609200 8244 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 09:54:43.614336 master-0 kubenswrapper[8244]: W0318 09:54:43.609205 8244 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 09:54:43.614336 master-0 kubenswrapper[8244]: W0318 09:54:43.609248 8244 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 09:54:43.614336 master-0 kubenswrapper[8244]: W0318 09:54:43.609254 8244 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 09:54:43.615269 master-0 kubenswrapper[8244]: W0318 09:54:43.609259 8244 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 09:54:43.615269 master-0 kubenswrapper[8244]: W0318 09:54:43.609266 8244 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 09:54:43.615269 master-0 kubenswrapper[8244]: W0318 09:54:43.609271 8244 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 09:54:43.615269 master-0 kubenswrapper[8244]: W0318 09:54:43.609276 8244 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 09:54:43.615269 master-0 kubenswrapper[8244]: W0318 09:54:43.609287 8244 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 09:54:43.615269 master-0 kubenswrapper[8244]: W0318 09:54:43.609297 8244 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 09:54:43.615269 master-0 kubenswrapper[8244]: I0318 09:54:43.609306 8244 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 09:54:43.615269 master-0 kubenswrapper[8244]: W0318 09:54:43.609801 8244 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 09:54:43.615269 master-0 kubenswrapper[8244]: W0318 09:54:43.609846 8244 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 09:54:43.615269 master-0 kubenswrapper[8244]: W0318 09:54:43.609854 8244 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 09:54:43.615269 master-0 kubenswrapper[8244]: W0318 09:54:43.609859 8244 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 09:54:43.615269 master-0 kubenswrapper[8244]: W0318 09:54:43.609866 8244 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 09:54:43.615269 master-0 kubenswrapper[8244]: W0318 09:54:43.609871 8244 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 09:54:43.615269 master-0 kubenswrapper[8244]: W0318 09:54:43.609876 8244 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 09:54:43.615269 master-0 kubenswrapper[8244]: W0318 09:54:43.609882 8244 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 09:54:43.615818 master-0 kubenswrapper[8244]: W0318 09:54:43.609922 8244 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 09:54:43.615818 master-0 kubenswrapper[8244]: W0318 09:54:43.609931 8244 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 09:54:43.615818 master-0 kubenswrapper[8244]: W0318 09:54:43.609940 8244 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 09:54:43.615818 master-0 kubenswrapper[8244]: W0318 09:54:43.609947 8244 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 09:54:43.615818 master-0 kubenswrapper[8244]: W0318 09:54:43.609953 8244 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 09:54:43.615818 master-0 kubenswrapper[8244]: W0318 09:54:43.609963 8244 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 09:54:43.615818 master-0 kubenswrapper[8244]: W0318 09:54:43.609969 8244 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 09:54:43.615818 master-0 kubenswrapper[8244]: W0318 09:54:43.609974 8244 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 09:54:43.615818 master-0 kubenswrapper[8244]: W0318 09:54:43.609980 8244 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 09:54:43.615818 master-0 kubenswrapper[8244]: W0318 09:54:43.609986 8244 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 09:54:43.615818 master-0 kubenswrapper[8244]: W0318 09:54:43.609991 8244 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 09:54:43.615818 master-0 kubenswrapper[8244]: W0318 09:54:43.609997 8244 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 09:54:43.615818 master-0 kubenswrapper[8244]: W0318 09:54:43.610003 8244 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 09:54:43.615818 master-0 kubenswrapper[8244]: W0318 09:54:43.610009 8244 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 09:54:43.615818 master-0 kubenswrapper[8244]: W0318 09:54:43.610014 8244 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 09:54:43.615818 master-0 kubenswrapper[8244]: W0318 09:54:43.610019 8244 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 09:54:43.615818 master-0 kubenswrapper[8244]: W0318 09:54:43.610024 8244 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 09:54:43.615818 master-0 kubenswrapper[8244]: W0318 09:54:43.610029 8244 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 09:54:43.615818 master-0 kubenswrapper[8244]: W0318 09:54:43.610039 8244 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 09:54:43.616540 master-0 kubenswrapper[8244]: W0318 09:54:43.610079 8244 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 09:54:43.616540 master-0 kubenswrapper[8244]: W0318 09:54:43.610085 8244 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 09:54:43.616540 master-0 kubenswrapper[8244]: W0318 09:54:43.610091 8244 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 09:54:43.616540 master-0 kubenswrapper[8244]: W0318 09:54:43.610096 8244 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 09:54:43.616540 master-0 kubenswrapper[8244]: W0318 09:54:43.610101 8244 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 09:54:43.616540 master-0 kubenswrapper[8244]: W0318 09:54:43.610106 8244 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 09:54:43.616540 master-0 kubenswrapper[8244]: W0318 09:54:43.610121 8244 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 09:54:43.616540 master-0 kubenswrapper[8244]: W0318 09:54:43.610127 8244 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 09:54:43.616540 master-0 kubenswrapper[8244]: W0318 09:54:43.610133 8244 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 09:54:43.616540 master-0 kubenswrapper[8244]: W0318 09:54:43.610138 8244 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 09:54:43.616540 master-0 kubenswrapper[8244]: W0318 09:54:43.610146 8244 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 09:54:43.616540 master-0 kubenswrapper[8244]: W0318 09:54:43.610190 8244 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 09:54:43.616540 master-0 kubenswrapper[8244]: W0318 09:54:43.610197 8244 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 09:54:43.616540 master-0 kubenswrapper[8244]: W0318 09:54:43.610203 8244 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 09:54:43.616540 master-0 kubenswrapper[8244]: W0318 09:54:43.610208 8244 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 09:54:43.616540 master-0 kubenswrapper[8244]: W0318 09:54:43.610213 8244 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 09:54:43.616540 master-0 kubenswrapper[8244]: W0318 09:54:43.610219 8244 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 09:54:43.616540 master-0 kubenswrapper[8244]: W0318 09:54:43.610225 8244 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 09:54:43.616540 master-0 kubenswrapper[8244]: W0318 09:54:43.610231 8244 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 09:54:43.616540 master-0 kubenswrapper[8244]: W0318 09:54:43.610238 8244 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 09:54:43.617272 master-0 kubenswrapper[8244]: W0318 09:54:43.610244 8244 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 09:54:43.617272 master-0 kubenswrapper[8244]: W0318 09:54:43.610250 8244 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 09:54:43.617272 master-0 kubenswrapper[8244]: W0318 09:54:43.610255 8244 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 09:54:43.617272 master-0 kubenswrapper[8244]: W0318 09:54:43.610261 8244 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 09:54:43.617272 master-0 kubenswrapper[8244]: W0318 09:54:43.610271 8244 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 09:54:43.617272 master-0 kubenswrapper[8244]: W0318 09:54:43.610276 8244 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 09:54:43.617272 master-0 kubenswrapper[8244]: W0318 09:54:43.610281 8244 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 09:54:43.617272 master-0 kubenswrapper[8244]: W0318 09:54:43.610286 8244 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 09:54:43.617272 master-0 kubenswrapper[8244]: W0318 09:54:43.610291 8244 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 09:54:43.617272 master-0 kubenswrapper[8244]: W0318 09:54:43.610338 8244 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 09:54:43.617272 master-0 kubenswrapper[8244]: W0318 09:54:43.610361 8244 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 09:54:43.617272 master-0 kubenswrapper[8244]: W0318 09:54:43.610367 8244 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 09:54:43.617272 master-0 kubenswrapper[8244]: W0318 09:54:43.610375 8244 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 09:54:43.617272 master-0 kubenswrapper[8244]: W0318 09:54:43.610381 8244 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 09:54:43.617272 master-0 kubenswrapper[8244]: W0318 09:54:43.610389 8244 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 09:54:43.617272 master-0 kubenswrapper[8244]: W0318 09:54:43.610394 8244 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 09:54:43.617272 master-0 kubenswrapper[8244]: W0318 09:54:43.610405 8244 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 09:54:43.617272 master-0 kubenswrapper[8244]: W0318 09:54:43.610412 8244 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 09:54:43.617272 master-0 kubenswrapper[8244]: W0318 09:54:43.610417 8244 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 09:54:43.617272 master-0 kubenswrapper[8244]: W0318 09:54:43.610423 8244 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 09:54:43.618040 master-0 kubenswrapper[8244]: W0318 09:54:43.610429 8244 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 09:54:43.618040 master-0 kubenswrapper[8244]: W0318 09:54:43.610434 8244 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 09:54:43.618040 master-0 kubenswrapper[8244]: W0318 09:54:43.610449 8244 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 09:54:43.618040 master-0 kubenswrapper[8244]: W0318 09:54:43.610454 8244 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 09:54:43.618040 master-0 kubenswrapper[8244]: W0318 09:54:43.610459 8244 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 09:54:43.618040 master-0 kubenswrapper[8244]: I0318 09:54:43.610467 8244 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 09:54:43.618040 master-0 kubenswrapper[8244]: I0318 09:54:43.612839 8244 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 18 09:54:43.618040 master-0 kubenswrapper[8244]: I0318 09:54:43.617971 8244 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Mar 18 09:54:43.618359 master-0 kubenswrapper[8244]: I0318 09:54:43.618169 8244 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 18 09:54:43.618604 master-0 kubenswrapper[8244]: I0318 09:54:43.618578 8244 server.go:997] "Starting client certificate rotation"
Mar 18 09:54:43.618604 master-0 kubenswrapper[8244]: I0318 09:54:43.618601 8244 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 18 09:54:43.618867 master-0 kubenswrapper[8244]: I0318 09:54:43.618763 8244 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-19 09:43:17 +0000 UTC, rotation deadline is 2026-03-19 04:58:51.934012807 +0000 UTC
Mar 18 09:54:43.618945 master-0 kubenswrapper[8244]: I0318 09:54:43.618910 8244 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 19h4m8.315149632s for next certificate rotation
Mar 18 09:54:43.620207 master-0 kubenswrapper[8244]: I0318 09:54:43.620170 8244 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 18 09:54:43.623540 master-0 kubenswrapper[8244]: I0318 09:54:43.623491 8244 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 18 09:54:43.627661 master-0 kubenswrapper[8244]: I0318 09:54:43.627618 8244 log.go:25] "Validated CRI v1 runtime API"
Mar 18 09:54:43.631054 master-0 kubenswrapper[8244]: I0318 09:54:43.631014 8244 log.go:25] "Validated CRI v1 image API"
Mar 18 09:54:43.632635 master-0 kubenswrapper[8244]: I0318 09:54:43.632584 8244 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 18 09:54:43.638754 master-0 kubenswrapper[8244]: I0318 09:54:43.638666 8244 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 b6f69005-7b27-4e50-b235-73833be75bbb:/dev/vda3]
Mar 18 09:54:43.639664 master-0 kubenswrapper[8244]: I0318 09:54:43.638744 8244 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0
minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/009c5d81632f0f08c8ed08e157decfc8eae7a1397849b1d1bb183a9c0b36e696/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/009c5d81632f0f08c8ed08e157decfc8eae7a1397849b1d1bb183a9c0b36e696/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/02d02240944e9230fa342b4b1030eceabc9b6ad789e1383eef1d657905cf15af/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/02d02240944e9230fa342b4b1030eceabc9b6ad789e1383eef1d657905cf15af/userdata/shm major:0 minor:253 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/04e5c67b5ae79340d56b2dfa469c98052472e299f29df84c3890635b1574d4c0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/04e5c67b5ae79340d56b2dfa469c98052472e299f29df84c3890635b1574d4c0/userdata/shm major:0 minor:263 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0c606c4f78cc5d83eabd2765b617ca07a15da7eb4ca4b85bfad4f7028933f81f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0c606c4f78cc5d83eabd2765b617ca07a15da7eb4ca4b85bfad4f7028933f81f/userdata/shm major:0 minor:119 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0d84a97391b20bbc1473efdc91b70735c4232a35d2754651bb0243ebf80ab3be/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0d84a97391b20bbc1473efdc91b70735c4232a35d2754651bb0243ebf80ab3be/userdata/shm major:0 minor:270 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/22baed4d026a2e73b0585b205810980acb867f841d04f3d6a690f1122607e415/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/22baed4d026a2e73b0585b205810980acb867f841d04f3d6a690f1122607e415/userdata/shm major:0 
minor:255 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2cf1bdb8eb09b95692725959e60306272582dc358e1d2a541fe6b5b5e57971c0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2cf1bdb8eb09b95692725959e60306272582dc358e1d2a541fe6b5b5e57971c0/userdata/shm major:0 minor:275 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3fdec4aed0d4d1e92fcea54e18530bddc4ceb0a577b38a5b2728e046e7e0d8a1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3fdec4aed0d4d1e92fcea54e18530bddc4ceb0a577b38a5b2728e046e7e0d8a1/userdata/shm major:0 minor:249 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/401219e24c2bd7d9e48328027e1c78136e8f25304b76126b40b8362b04997723/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/401219e24c2bd7d9e48328027e1c78136e8f25304b76126b40b8362b04997723/userdata/shm major:0 minor:46 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/543fb2147aca575376ed7bd211cfca3f8a0e31f62df5e58bf47f4f7fc11fc303/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/543fb2147aca575376ed7bd211cfca3f8a0e31f62df5e58bf47f4f7fc11fc303/userdata/shm major:0 minor:268 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5ad9370766ae18aa384f3b2f07e9d3cada2bbe156f6bcba4f02016b49f4e713f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5ad9370766ae18aa384f3b2f07e9d3cada2bbe156f6bcba4f02016b49f4e713f/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5f76a44720ca3021debcea94492e12e9442fdb9f8fbe338ee1965217a14109bd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5f76a44720ca3021debcea94492e12e9442fdb9f8fbe338ee1965217a14109bd/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/613533c3a19224e9e30dba35639ecd39810b8db2f7864917803baa176a7bbed0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/613533c3a19224e9e30dba35639ecd39810b8db2f7864917803baa176a7bbed0/userdata/shm major:0 minor:265 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6a356215383e4477cfa420d0c3e3c8a05dd9f0afe4bf19cf96af5611814ab90a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6a356215383e4477cfa420d0c3e3c8a05dd9f0afe4bf19cf96af5611814ab90a/userdata/shm major:0 minor:99 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/84a3629f241ccd15c8649ba629b3be31e2785a3b2224bbe09e95e6dbad4b5613/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/84a3629f241ccd15c8649ba629b3be31e2785a3b2224bbe09e95e6dbad4b5613/userdata/shm major:0 minor:267 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8f11956d88039b0b64ae7a326d73a1a29f38de2a62777ca3d744161f04878819/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8f11956d88039b0b64ae7a326d73a1a29f38de2a62777ca3d744161f04878819/userdata/shm major:0 minor:257 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/983b16a4206de1f333a12de0d35d4c31d9a34f31d59c85ba786062d1421c921f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/983b16a4206de1f333a12de0d35d4c31d9a34f31d59c85ba786062d1421c921f/userdata/shm major:0 minor:308 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a0f6a23031d96231e99cbb9f2b16dea4d913c0ee0df84104c4f8c08579a04daa/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a0f6a23031d96231e99cbb9f2b16dea4d913c0ee0df84104c4f8c08579a04daa/userdata/shm major:0 minor:273 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/b58497ff3c8993b13d6f045f9b3aa17b9b5e464305fd642acb69bc40d01db14a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b58497ff3c8993b13d6f045f9b3aa17b9b5e464305fd642acb69bc40d01db14a/userdata/shm major:0 minor:148 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c669ea9b66a51273cf2d30ced0d0c7e6bfc9166bf41cddcbf86ac434cad57ea6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c669ea9b66a51273cf2d30ced0d0c7e6bfc9166bf41cddcbf86ac434cad57ea6/userdata/shm major:0 minor:379 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cd2ad4fe81a1a347f10f858030eebc98abfffaf65eba926cffe2c8990ddb0614/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cd2ad4fe81a1a347f10f858030eebc98abfffaf65eba926cffe2c8990ddb0614/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cf5a74b27454f5e8b1c18f8ef6d030c5b30a033cbc5baf882408ad3e065176ae/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cf5a74b27454f5e8b1c18f8ef6d030c5b30a033cbc5baf882408ad3e065176ae/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cfbca75437300b7b872afd6b8a0f67b07ac16a2585e869d44683d0377dfcaeaa/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cfbca75437300b7b872afd6b8a0f67b07ac16a2585e869d44683d0377dfcaeaa/userdata/shm major:0 minor:137 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d9a9cd3f2878ec84a255f5f74dc3526f3a1623550d44547c9ce47a07a51bb959/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d9a9cd3f2878ec84a255f5f74dc3526f3a1623550d44547c9ce47a07a51bb959/userdata/shm major:0 minor:114 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/dd0e307b59dcdef36339f9469bcea9ae60dc835b43a1e8b7190883e66520e662/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dd0e307b59dcdef36339f9469bcea9ae60dc835b43a1e8b7190883e66520e662/userdata/shm major:0 minor:384 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dfd0e7e42052e04911701599adae500aa7e091be93bca4bd99512045dd966402/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dfd0e7e42052e04911701599adae500aa7e091be93bca4bd99512045dd966402/userdata/shm major:0 minor:259 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ee46779ae89b4ca2573c0db3f08f40bcd1f36bd939f6b097aaa8ab0676c68690/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ee46779ae89b4ca2573c0db3f08f40bcd1f36bd939f6b097aaa8ab0676c68690/userdata/shm major:0 minor:250 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/025ade16-8502-4b71-a4be-f13dee081e3a/volumes/kubernetes.io~projected/kube-api-access-8l4b6:{mountpoint:/var/lib/kubelet/pods/025ade16-8502-4b71-a4be-f13dee081e3a/volumes/kubernetes.io~projected/kube-api-access-8l4b6 major:0 minor:386 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/03de1ea6-da57-4e13-8e5a-d5e10a9f9957/volumes/kubernetes.io~projected/kube-api-access-hcj8f:{mountpoint:/var/lib/kubelet/pods/03de1ea6-da57-4e13-8e5a-d5e10a9f9957/volumes/kubernetes.io~projected/kube-api-access-hcj8f major:0 minor:105 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0442ec6c-5973-40a5-a0c3-dc02de46d343/volumes/kubernetes.io~projected/kube-api-access-5x6ht:{mountpoint:/var/lib/kubelet/pods/0442ec6c-5973-40a5-a0c3-dc02de46d343/volumes/kubernetes.io~projected/kube-api-access-5x6ht major:0 minor:123 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/0999f781-3299-4cb6-ba76-2a4f4584c685/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/0999f781-3299-4cb6-ba76-2a4f4584c685/volumes/kubernetes.io~projected/kube-api-access major:0 minor:230 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0999f781-3299-4cb6-ba76-2a4f4584c685/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0999f781-3299-4cb6-ba76-2a4f4584c685/volumes/kubernetes.io~secret/serving-cert major:0 minor:216 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0d72e695-0183-4ee8-8add-5425e67f7138/volumes/kubernetes.io~projected/kube-api-access-g6bvr:{mountpoint:/var/lib/kubelet/pods/0d72e695-0183-4ee8-8add-5425e67f7138/volumes/kubernetes.io~projected/kube-api-access-g6bvr major:0 minor:237 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0d72e695-0183-4ee8-8add-5425e67f7138/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0d72e695-0183-4ee8-8add-5425e67f7138/volumes/kubernetes.io~secret/serving-cert major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480/volumes/kubernetes.io~projected/kube-api-access-9fjk8:{mountpoint:/var/lib/kubelet/pods/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480/volumes/kubernetes.io~projected/kube-api-access-9fjk8 major:0 minor:238 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480/volumes/kubernetes.io~secret/serving-cert major:0 minor:223 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/15f8941b-dba2-40ba-86d5-3318f5b635cc/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/15f8941b-dba2-40ba-86d5-3318f5b635cc/volumes/kubernetes.io~projected/kube-api-access major:0 minor:43 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6/volumes/kubernetes.io~projected/kube-api-access major:0 minor:226 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6/volumes/kubernetes.io~secret/serving-cert major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4/volumes/kubernetes.io~projected/kube-api-access-p5dk8:{mountpoint:/var/lib/kubelet/pods/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4/volumes/kubernetes.io~projected/kube-api-access-p5dk8 major:0 minor:233 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4/volumes/kubernetes.io~secret/serving-cert major:0 minor:206 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/62b82d72-d73c-451a-84e1-551d73036aa8/volumes/kubernetes.io~projected/kube-api-access-lvnrf:{mountpoint:/var/lib/kubelet/pods/62b82d72-d73c-451a-84e1-551d73036aa8/volumes/kubernetes.io~projected/kube-api-access-lvnrf major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6a6a616d-012a-479e-ab3d-b21295ea1805/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/6a6a616d-012a-479e-ab3d-b21295ea1805/volumes/kubernetes.io~projected/kube-api-access major:0 minor:231 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6a6a616d-012a-479e-ab3d-b21295ea1805/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/6a6a616d-012a-479e-ab3d-b21295ea1805/volumes/kubernetes.io~secret/serving-cert major:0 minor:210 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/6f266bad-8b30-4300-ad93-9d48e61f2440/volumes/kubernetes.io~projected/kube-api-access-shbrj:{mountpoint:/var/lib/kubelet/pods/6f266bad-8b30-4300-ad93-9d48e61f2440/volumes/kubernetes.io~projected/kube-api-access-shbrj major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/74795f5d-dcd7-4723-8931-c34b59ce3087/volumes/kubernetes.io~projected/kube-api-access-8rzsk:{mountpoint:/var/lib/kubelet/pods/74795f5d-dcd7-4723-8931-c34b59ce3087/volumes/kubernetes.io~projected/kube-api-access-8rzsk major:0 minor:303 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8126b78e-d1e4-4de7-a71d-ebc9fa0afdae/volumes/kubernetes.io~projected/kube-api-access-hww8g:{mountpoint:/var/lib/kubelet/pods/8126b78e-d1e4-4de7-a71d-ebc9fa0afdae/volumes/kubernetes.io~projected/kube-api-access-hww8g major:0 minor:378 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8cb5158f-2199-42c0-995a-8490c9ec8a95/volumes/kubernetes.io~projected/kube-api-access-p2chb:{mountpoint:/var/lib/kubelet/pods/8cb5158f-2199-42c0-995a-8490c9ec8a95/volumes/kubernetes.io~projected/kube-api-access-p2chb major:0 minor:232 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8e812dd9-cd05-4e9e-8710-d0920181ece2/volumes/kubernetes.io~projected/kube-api-access-s54f9:{mountpoint:/var/lib/kubelet/pods/8e812dd9-cd05-4e9e-8710-d0920181ece2/volumes/kubernetes.io~projected/kube-api-access-s54f9 major:0 minor:236 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8ee99294-4785-49d0-b493-0d734cf09396/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/8ee99294-4785-49d0-b493-0d734cf09396/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8ee99294-4785-49d0-b493-0d734cf09396/volumes/kubernetes.io~projected/kube-api-access-tb7tz:{mountpoint:/var/lib/kubelet/pods/8ee99294-4785-49d0-b493-0d734cf09396/volumes/kubernetes.io~projected/kube-api-access-tb7tz major:0 minor:224 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/91331360-dc70-45bb-a815-e00664bae6c4/volumes/kubernetes.io~projected/kube-api-access-8w8sl:{mountpoint:/var/lib/kubelet/pods/91331360-dc70-45bb-a815-e00664bae6c4/volumes/kubernetes.io~projected/kube-api-access-8w8sl major:0 minor:118 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/932a70df-3afe-4873-9449-ab6e061d3fe3/volumes/kubernetes.io~projected/kube-api-access-fv8x5:{mountpoint:/var/lib/kubelet/pods/932a70df-3afe-4873-9449-ab6e061d3fe3/volumes/kubernetes.io~projected/kube-api-access-fv8x5 major:0 minor:383 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9ccdc221-4ec5-487e-8ec4-85284ed628d8/volumes/kubernetes.io~projected/kube-api-access-ghd2r:{mountpoint:/var/lib/kubelet/pods/9ccdc221-4ec5-487e-8ec4-85284ed628d8/volumes/kubernetes.io~projected/kube-api-access-ghd2r major:0 minor:92 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9ccdc221-4ec5-487e-8ec4-85284ed628d8/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/9ccdc221-4ec5-487e-8ec4-85284ed628d8/volumes/kubernetes.io~secret/metrics-tls major:0 minor:85 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28/volumes/kubernetes.io~projected/kube-api-access-gmffc:{mountpoint:/var/lib/kubelet/pods/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28/volumes/kubernetes.io~projected/kube-api-access-gmffc major:0 minor:136 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:127 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/a078565a-6970-4f42-84f4-938f1d637245/volumes/kubernetes.io~projected/kube-api-access-cxv6v:{mountpoint:/var/lib/kubelet/pods/a078565a-6970-4f42-84f4-938f1d637245/volumes/kubernetes.io~projected/kube-api-access-cxv6v major:0 minor:228 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a078565a-6970-4f42-84f4-938f1d637245/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/a078565a-6970-4f42-84f4-938f1d637245/volumes/kubernetes.io~secret/etcd-client major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a078565a-6970-4f42-84f4-938f1d637245/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a078565a-6970-4f42-84f4-938f1d637245/volumes/kubernetes.io~secret/serving-cert major:0 minor:217 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/accc57fb-75f5-4f89-9804-6ede7f77e27c/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/accc57fb-75f5-4f89-9804-6ede7f77e27c/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/accc57fb-75f5-4f89-9804-6ede7f77e27c/volumes/kubernetes.io~projected/kube-api-access-nwfph:{mountpoint:/var/lib/kubelet/pods/accc57fb-75f5-4f89-9804-6ede7f77e27c/volumes/kubernetes.io~projected/kube-api-access-nwfph major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bb35841e-d992-4044-aaaa-06c9faf47bd0/volumes/kubernetes.io~projected/kube-api-access-zlxfz:{mountpoint:/var/lib/kubelet/pods/bb35841e-d992-4044-aaaa-06c9faf47bd0/volumes/kubernetes.io~projected/kube-api-access-zlxfz major:0 minor:229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bb35841e-d992-4044-aaaa-06c9faf47bd0/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/bb35841e-d992-4044-aaaa-06c9faf47bd0/volumes/kubernetes.io~secret/serving-cert major:0 minor:221 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/bb942756-bac7-414d-b179-cebdce588a13/volumes/kubernetes.io~projected/kube-api-access-2ktpl:{mountpoint:/var/lib/kubelet/pods/bb942756-bac7-414d-b179-cebdce588a13/volumes/kubernetes.io~projected/kube-api-access-2ktpl major:0 minor:147 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bb942756-bac7-414d-b179-cebdce588a13/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/bb942756-bac7-414d-b179-cebdce588a13/volumes/kubernetes.io~secret/webhook-cert major:0 minor:126 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c2635254-a491-42e5-b598-461c24bf77ca/volumes/kubernetes.io~projected/kube-api-access-p4hfd:{mountpoint:/var/lib/kubelet/pods/c2635254-a491-42e5-b598-461c24bf77ca/volumes/kubernetes.io~projected/kube-api-access-p4hfd major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4/volumes/kubernetes.io~projected/kube-api-access-dkzq9:{mountpoint:/var/lib/kubelet/pods/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4/volumes/kubernetes.io~projected/kube-api-access-dkzq9 major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d0605021-862d-424a-a4c1-037fb005b77e/volumes/kubernetes.io~projected/kube-api-access-cxj5c:{mountpoint:/var/lib/kubelet/pods/d0605021-862d-424a-a4c1-037fb005b77e/volumes/kubernetes.io~projected/kube-api-access-cxj5c major:0 minor:125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d0605021-862d-424a-a4c1-037fb005b77e/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/d0605021-862d-424a-a4c1-037fb005b77e/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d26036f1-bdce-4ec5-873f-962fa7e8e6c1/volumes/kubernetes.io~projected/kube-api-access-lhzg4:{mountpoint:/var/lib/kubelet/pods/d26036f1-bdce-4ec5-873f-962fa7e8e6c1/volumes/kubernetes.io~projected/kube-api-access-lhzg4 major:0 minor:240 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d26036f1-bdce-4ec5-873f-962fa7e8e6c1/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/d26036f1-bdce-4ec5-873f-962fa7e8e6c1/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d4d2218c-f9df-4d43-8727-ed3a920e23f7/volumes/kubernetes.io~projected/kube-api-access-w4qp9:{mountpoint:/var/lib/kubelet/pods/d4d2218c-f9df-4d43-8727-ed3a920e23f7/volumes/kubernetes.io~projected/kube-api-access-w4qp9 major:0 minor:234 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/db52ca42-e458-407f-9eeb-bf6de6405edc/volumes/kubernetes.io~projected/kube-api-access-jx9p2:{mountpoint:/var/lib/kubelet/pods/db52ca42-e458-407f-9eeb-bf6de6405edc/volumes/kubernetes.io~projected/kube-api-access-jx9p2 major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ec53d7fa-445b-4e1d-84ef-545f08e80ccc/volumes/kubernetes.io~projected/kube-api-access-wj9sq:{mountpoint:/var/lib/kubelet/pods/ec53d7fa-445b-4e1d-84ef-545f08e80ccc/volumes/kubernetes.io~projected/kube-api-access-wj9sq major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ec53d7fa-445b-4e1d-84ef-545f08e80ccc/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/ec53d7fa-445b-4e1d-84ef-545f08e80ccc/volumes/kubernetes.io~secret/serving-cert major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ee376320-9ca0-444d-ab37-9cbcb6729b11/volumes/kubernetes.io~projected/kube-api-access-25k9g:{mountpoint:/var/lib/kubelet/pods/ee376320-9ca0-444d-ab37-9cbcb6729b11/volumes/kubernetes.io~projected/kube-api-access-25k9g major:0 minor:225 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f076eaf0-b041-4db0-ba06-3d85e23bb654/volumes/kubernetes.io~projected/kube-api-access-f25pg:{mountpoint:/var/lib/kubelet/pods/f076eaf0-b041-4db0-ba06-3d85e23bb654/volumes/kubernetes.io~projected/kube-api-access-f25pg major:0 minor:248 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/f076eaf0-b041-4db0-ba06-3d85e23bb654/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f076eaf0-b041-4db0-ba06-3d85e23bb654/volumes/kubernetes.io~secret/serving-cert major:0 minor:222 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f69a00b6-d908-4485-bb0d-57594fc01d24/volumes/kubernetes.io~projected/kube-api-access-5r7qd:{mountpoint:/var/lib/kubelet/pods/f69a00b6-d908-4485-bb0d-57594fc01d24/volumes/kubernetes.io~projected/kube-api-access-5r7qd major:0 minor:244 fsType:tmpfs blockSize:0} overlay_0-101:{mountpoint:/var/lib/containers/storage/overlay/ff10701b53dea463c4824d679f82a5633a3a3662483486b7662ccfbc786c2e6f/merged major:0 minor:101 fsType:overlay blockSize:0} overlay_0-103:{mountpoint:/var/lib/containers/storage/overlay/148657cc95fa478a2bd801a392f5217143be02fa5653ee8774da652779481d2b/merged major:0 minor:103 fsType:overlay blockSize:0} overlay_0-108:{mountpoint:/var/lib/containers/storage/overlay/2494a56ff26d3d38bf90b00181ae88f3f1607b50dc34612c2fb1779d1953c8fa/merged major:0 minor:108 fsType:overlay blockSize:0} overlay_0-116:{mountpoint:/var/lib/containers/storage/overlay/fdb47f4b9d5df1e8c3330bccb5acd47e483e24f87905c4bff7b379cbd5d3ff04/merged major:0 minor:116 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/4bc48671c974ede535f26e3a419fb9349af4289622851a7023cb29880dd10c2f/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-130:{mountpoint:/var/lib/containers/storage/overlay/0fa8acbcd4c109a3a88bae9a31a51530836aba4a70311fc70c59d120045c3fce/merged major:0 minor:130 fsType:overlay blockSize:0} overlay_0-132:{mountpoint:/var/lib/containers/storage/overlay/e2c237d8aed33ac38d3c45ef5ae079d952eba20eea8d3a5093e07f997f92acb8/merged major:0 minor:132 fsType:overlay blockSize:0} overlay_0-134:{mountpoint:/var/lib/containers/storage/overlay/d9039efa2afa273f1fbc0bc8bc890e1b7fa5f4e326660e96287b45bc5f2da142/merged major:0 minor:134 fsType:overlay blockSize:0} 
overlay_0-139:{mountpoint:/var/lib/containers/storage/overlay/4efea80271bc0728f3f17a65ec711ca974a99f2c0c14d5295468915b5c717a97/merged major:0 minor:139 fsType:overlay blockSize:0} overlay_0-141:{mountpoint:/var/lib/containers/storage/overlay/203e77a7b17403fca9fdb098ca21cf827a03ebfb6c362ae3f28a7c3c88f12f76/merged major:0 minor:141 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/9b489f6c061bc46aa61bff12ca97aebbb9a02f49dd72d4b847c3f6dc7a4ac084/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/c21e9a8db214fd8ef2c468ded71f8c42703e0f08cfc3f6df1bbe8dec5ff4713c/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/0eb5e9de86866a0464fe042514a8da2a6da8e8ba87cf8d0ba1c6014fc3f1dd35/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-156:{mountpoint:/var/lib/containers/storage/overlay/6d5b3297e5ada99accdc4e1892f8693a309e81334d6b6caeb0200735a8a40b9c/merged major:0 minor:156 fsType:overlay blockSize:0} overlay_0-158:{mountpoint:/var/lib/containers/storage/overlay/8d446de756bcef9f8a5a01783d2f6089ab475d97bf3e74fd17833204ea034477/merged major:0 minor:158 fsType:overlay blockSize:0} overlay_0-159:{mountpoint:/var/lib/containers/storage/overlay/b96213d3ceb81fa6960c350230c24cc2b358f3ee790d34de820d74a7524c528b/merged major:0 minor:159 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/377a8de062c0d25e4f8958f39df6266e0b414476ffb2a252382a7ba465a3d6ac/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/093430c757965110c5b283d8971c8fddf482d3b6b0afc4c844d55c840195af10/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/4e44f9c338319ce2e784616ab696c22924a1d0c6f1058cb8880792b0575e1385/merged major:0 minor:184 fsType:overlay blockSize:0} 
overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/e2c6f1dac9da5e4e145ea2bb5c601de1a380722ddfb1b2452b5b1bf11ceb8912/merged major:0 minor:189 fsType:overlay blockSize:0} overlay_0-199:{mountpoint:/var/lib/containers/storage/overlay/d08ca1767c03e84dda4fac9547dc09605f540d1877be3a3094288714e045dfb8/merged major:0 minor:199 fsType:overlay blockSize:0} overlay_0-200:{mountpoint:/var/lib/containers/storage/overlay/a3cf945a9ff0c485545bb6ee95c46759a21967c7986331c1d9a4e243e58fbcaa/merged major:0 minor:200 fsType:overlay blockSize:0} overlay_0-261:{mountpoint:/var/lib/containers/storage/overlay/3434a780a2bb82029b9bd63bce5b9b1734abe24605a918bae57731ae2bb4d3d8/merged major:0 minor:261 fsType:overlay blockSize:0} overlay_0-277:{mountpoint:/var/lib/containers/storage/overlay/69d771125edd4129e8f9be89ff23ae5fd6b9cd660e534f2a146027478ee2bcdf/merged major:0 minor:277 fsType:overlay blockSize:0} overlay_0-279:{mountpoint:/var/lib/containers/storage/overlay/301958ab817a6c5e6190cee7f5a5c46895a49398cff0c83c7cf479a46ed8da10/merged major:0 minor:279 fsType:overlay blockSize:0} overlay_0-281:{mountpoint:/var/lib/containers/storage/overlay/6fb40569e83c0e07966bad2311bced539e8a255ccb224ff61d850b63fbd9858b/merged major:0 minor:281 fsType:overlay blockSize:0} overlay_0-283:{mountpoint:/var/lib/containers/storage/overlay/abce63d8e805d33f68144e192704f4b7fcd37fd0c2d249cba811d26bc97446bf/merged major:0 minor:283 fsType:overlay blockSize:0} overlay_0-285:{mountpoint:/var/lib/containers/storage/overlay/f7e4d26dff6ff3f65902128e2a98e450cb4b2123a159e59ee8faf61606fc7336/merged major:0 minor:285 fsType:overlay blockSize:0} overlay_0-287:{mountpoint:/var/lib/containers/storage/overlay/f69102aa53ee5cbd6d4d5785743c9e7dc85277805dc072e812ba813a8c577033/merged major:0 minor:287 fsType:overlay blockSize:0} overlay_0-289:{mountpoint:/var/lib/containers/storage/overlay/5718e1f0c769597cd32253f880edc05f2943a6aede5e5ed60d24b23442d6bbe4/merged major:0 minor:289 fsType:overlay blockSize:0} 
overlay_0-291:{mountpoint:/var/lib/containers/storage/overlay/2a8a6ac1832281932c965ea9d6d8324c09bd365f0266f19fc890014c6320d676/merged major:0 minor:291 fsType:overlay blockSize:0} overlay_0-293:{mountpoint:/var/lib/containers/storage/overlay/8840f7c5c59fa72e0ba660c84d61456e42100f73cb1a4264514dd61e56f6fa66/merged major:0 minor:293 fsType:overlay blockSize:0} overlay_0-295:{mountpoint:/var/lib/containers/storage/overlay/57c3ffa344cc9d29258adcc64b8ddd95de1008fea4fd331d85022de56e4dec95/merged major:0 minor:295 fsType:overlay blockSize:0} overlay_0-297:{mountpoint:/var/lib/containers/storage/overlay/91711ef80ad25e413c6a6304ad487ce71652b46b839ed9c5e0828a30c452b03b/merged major:0 minor:297 fsType:overlay blockSize:0} overlay_0-299:{mountpoint:/var/lib/containers/storage/overlay/48a11d4dbfb229c8c14fdd613bdb330f4dc444d3cf63dc384efd91d252eb7775/merged major:0 minor:299 fsType:overlay blockSize:0} overlay_0-301:{mountpoint:/var/lib/containers/storage/overlay/93740d36672045e1cf873fb04d52953c5089865f2329c7b70908471b07f6f4a7/merged major:0 minor:301 fsType:overlay blockSize:0} overlay_0-310:{mountpoint:/var/lib/containers/storage/overlay/7431d6bfd8f5bb127002010982e4f57a5ebfc3680c838e5795fcb94f7bcc5153/merged major:0 minor:310 fsType:overlay blockSize:0} overlay_0-312:{mountpoint:/var/lib/containers/storage/overlay/3ed90baffed4c0eb203521712b200848ff80c3c6d39f16440f4e20ad636f4ba8/merged major:0 minor:312 fsType:overlay blockSize:0} overlay_0-314:{mountpoint:/var/lib/containers/storage/overlay/b278248fb41caa74fd075736ef9f72167a871800c1942077c7721b971deba978/merged major:0 minor:314 fsType:overlay blockSize:0} overlay_0-320:{mountpoint:/var/lib/containers/storage/overlay/ec5dde60adba07e2a00e6d0d50068a8883d3352e36a15006eee9b7f07515e836/merged major:0 minor:320 fsType:overlay blockSize:0} overlay_0-322:{mountpoint:/var/lib/containers/storage/overlay/be35508865a535211d36873d5ccecebeb2e924982be92ba9a7494d67cfff7517/merged major:0 minor:322 fsType:overlay blockSize:0} 
overlay_0-325:{mountpoint:/var/lib/containers/storage/overlay/a2d3edc029ef694c6831b54b7f2d7b91fc9c66b5470a2a4fc12ad26106274da3/merged major:0 minor:325 fsType:overlay blockSize:0} overlay_0-328:{mountpoint:/var/lib/containers/storage/overlay/668fd3110347728da7d51e81797015426a88c4d42e5dc331a3c11fa242d5edc0/merged major:0 minor:328 fsType:overlay blockSize:0} overlay_0-330:{mountpoint:/var/lib/containers/storage/overlay/81d1ac5201bfe39be395a485d6abf4eb2d056ea036e98439e9471776bd193ca6/merged major:0 minor:330 fsType:overlay blockSize:0} overlay_0-332:{mountpoint:/var/lib/containers/storage/overlay/ac6e05893e3c11297d400536dd7733a11246790fca83e727647f9f1403794d65/merged major:0 minor:332 fsType:overlay blockSize:0} overlay_0-334:{mountpoint:/var/lib/containers/storage/overlay/435b61e309422c351f5c353a92b2ffbdc8f0902c43defdacde51f5d03be72d37/merged major:0 minor:334 fsType:overlay blockSize:0} overlay_0-346:{mountpoint:/var/lib/containers/storage/overlay/bd2a621e80600a9303a97ddd106a10236f04a6c1253c02425bb7ba517fcbeb09/merged major:0 minor:346 fsType:overlay blockSize:0} overlay_0-350:{mountpoint:/var/lib/containers/storage/overlay/d15ccc3178b8120ebae26f99ad724b3f1c1c6b0e22d95cb6e4422c34399bd1b1/merged major:0 minor:350 fsType:overlay blockSize:0} overlay_0-381:{mountpoint:/var/lib/containers/storage/overlay/b74a69d283ddbbce0a45800fd2f5b8159ddcc802e70c714ff54f7d8775d843a7/merged major:0 minor:381 fsType:overlay blockSize:0} overlay_0-387:{mountpoint:/var/lib/containers/storage/overlay/b0fa71ce4004282a7fa567f64ae9badf631e1ec4d13d2d1a4c534a69e1133131/merged major:0 minor:387 fsType:overlay blockSize:0} overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/c664daebae420c7921280db532cf574735cf09c4c735aa1eea63df8ba98326f9/merged major:0 minor:44 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/b0daec689fd3f186ef0c3483c60314dede5b36dfad80c2e314121cfd9d0362f4/merged major:0 minor:48 fsType:overlay blockSize:0} 
overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/145db2ef88cd3c072d348f18225e1bf1ccbb6a0c2ed1db75c37f38c7c4c5331b/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/6ed460fee8ae383e7cb5dc2cdf1fada68466fc89521fa5fb139040cd5cb78dd9/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/cf2a46c8402d9441a83e1d9d39c3d851ffd7071f77f7786a9219e3e17acf2b9c/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/337691cee206dd7fd2b86c125e7c1dafb01aa8e6fef8a219db5c7e0703fceaae/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/02197374367f5b6603c783b41af3134f0dfc04f456de41fb0c29f3c3ab4e7117/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/e926e4145d313b87c79a514954fafea975054e123487b05efc43218c87173296/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-68:{mountpoint:/var/lib/containers/storage/overlay/d2105b93f06a3dd56a47dec0489569d487c4ba9a17fafef65b5f1243ae34e03d/merged major:0 minor:68 fsType:overlay blockSize:0} overlay_0-79:{mountpoint:/var/lib/containers/storage/overlay/972feb00919613e5a4912e2cc3c1c713efd0f0f0bd93b0dcfcff94ee033ca1f7/merged major:0 minor:79 fsType:overlay blockSize:0} overlay_0-80:{mountpoint:/var/lib/containers/storage/overlay/b255226867d50b0d0e105e8c3ab3ad599e445e4f9a10c756d80edbbe9df5a555/merged major:0 minor:80 fsType:overlay blockSize:0} overlay_0-82:{mountpoint:/var/lib/containers/storage/overlay/ea1b18454652d13c480aba387f5c612514d08c282e751f635a850f0a529499c0/merged major:0 minor:82 fsType:overlay blockSize:0}] Mar 18 09:54:43.663408 master-0 kubenswrapper[8244]: I0318 09:54:43.662639 8244 manager.go:217] Machine: {Timestamp:2026-03-18 09:54:43.661554632 +0000 UTC m=+0.141290780 CPUVendorID:AuthenticAMD NumCores:12 
NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:2ce24ad926944999b07b278206f0e4a4 SystemUUID:2ce24ad9-2694-4999-b07b-278206f0e4a4 BootID:b58383dd-cfef-45af-ac7b-26a609b46986 Filesystems:[{Device:/var/lib/kubelet/pods/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:223 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/d26036f1-bdce-4ec5-873f-962fa7e8e6c1/volumes/kubernetes.io~projected/kube-api-access-lhzg4 DeviceMajor:0 DeviceMinor:240 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-325 DeviceMajor:0 DeviceMinor:325 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cf5a74b27454f5e8b1c18f8ef6d030c5b30a033cbc5baf882408ad3e065176ae/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/8cb5158f-2199-42c0-995a-8490c9ec8a95/volumes/kubernetes.io~projected/kube-api-access-p2chb DeviceMajor:0 DeviceMinor:232 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dfd0e7e42052e04911701599adae500aa7e091be93bca4bd99512045dd966402/userdata/shm DeviceMajor:0 DeviceMinor:259 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-350 DeviceMajor:0 DeviceMinor:350 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-199 DeviceMajor:0 DeviceMinor:199 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0d72e695-0183-4ee8-8add-5425e67f7138/volumes/kubernetes.io~projected/kube-api-access-g6bvr DeviceMajor:0 DeviceMinor:237 Capacity:32475525120 Type:vfs 
Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2cf1bdb8eb09b95692725959e60306272582dc358e1d2a541fe6b5b5e57971c0/userdata/shm DeviceMajor:0 DeviceMinor:275 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:226 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/62b82d72-d73c-451a-84e1-551d73036aa8/volumes/kubernetes.io~projected/kube-api-access-lvnrf DeviceMajor:0 DeviceMinor:243 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0d84a97391b20bbc1473efdc91b70735c4232a35d2754651bb0243ebf80ab3be/userdata/shm DeviceMajor:0 DeviceMinor:270 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/74795f5d-dcd7-4723-8931-c34b59ce3087/volumes/kubernetes.io~projected/kube-api-access-8rzsk DeviceMajor:0 DeviceMinor:303 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-322 DeviceMajor:0 DeviceMinor:322 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c669ea9b66a51273cf2d30ced0d0c7e6bfc9166bf41cddcbf86ac434cad57ea6/userdata/shm DeviceMajor:0 DeviceMinor:379 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-134 DeviceMajor:0 DeviceMinor:134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-82 DeviceMajor:0 DeviceMinor:82 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6a6a616d-012a-479e-ab3d-b21295ea1805/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:210 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/401219e24c2bd7d9e48328027e1c78136e8f25304b76126b40b8362b04997723/userdata/shm 
DeviceMajor:0 DeviceMinor:46 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4/volumes/kubernetes.io~projected/kube-api-access-dkzq9 DeviceMajor:0 DeviceMinor:235 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8f11956d88039b0b64ae7a326d73a1a29f38de2a62777ca3d744161f04878819/userdata/shm DeviceMajor:0 DeviceMinor:257 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4/volumes/kubernetes.io~projected/kube-api-access-p5dk8 DeviceMajor:0 DeviceMinor:233 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-314 DeviceMajor:0 DeviceMinor:314 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-330 DeviceMajor:0 DeviceMinor:330 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-381 DeviceMajor:0 DeviceMinor:381 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-293 DeviceMajor:0 DeviceMinor:293 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d26036f1-bdce-4ec5-873f-962fa7e8e6c1/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:218 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/ec53d7fa-445b-4e1d-84ef-545f08e80ccc/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:220 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/accc57fb-75f5-4f89-9804-6ede7f77e27c/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:227 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-285 DeviceMajor:0 DeviceMinor:285 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-310 DeviceMajor:0 DeviceMinor:310 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-334 DeviceMajor:0 DeviceMinor:334 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0999f781-3299-4cb6-ba76-2a4f4584c685/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:216 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-68 DeviceMajor:0 DeviceMinor:68 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-101 DeviceMajor:0 DeviceMinor:101 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0442ec6c-5973-40a5-a0c3-dc02de46d343/volumes/kubernetes.io~projected/kube-api-access-5x6ht DeviceMajor:0 DeviceMinor:123 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/d0605021-862d-424a-a4c1-037fb005b77e/volumes/kubernetes.io~projected/kube-api-access-cxj5c DeviceMajor:0 DeviceMinor:125 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-283 DeviceMajor:0 DeviceMinor:283 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-291 DeviceMajor:0 DeviceMinor:291 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-139 DeviceMajor:0 DeviceMinor:139 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-132 DeviceMajor:0 DeviceMinor:132 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-200 DeviceMajor:0 DeviceMinor:200 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/5f76a44720ca3021debcea94492e12e9442fdb9f8fbe338ee1965217a14109bd/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-299 DeviceMajor:0 DeviceMinor:299 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-79 DeviceMajor:0 DeviceMinor:79 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bb35841e-d992-4044-aaaa-06c9faf47bd0/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:221 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/ec53d7fa-445b-4e1d-84ef-545f08e80ccc/volumes/kubernetes.io~projected/kube-api-access-wj9sq DeviceMajor:0 DeviceMinor:247 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/84a3629f241ccd15c8649ba629b3be31e2785a3b2224bbe09e95e6dbad4b5613/userdata/shm DeviceMajor:0 DeviceMinor:267 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bb35841e-d992-4044-aaaa-06c9faf47bd0/volumes/kubernetes.io~projected/kube-api-access-zlxfz DeviceMajor:0 DeviceMinor:229 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-156 DeviceMajor:0 DeviceMinor:156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-141 DeviceMajor:0 DeviceMinor:141 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/db52ca42-e458-407f-9eeb-bf6de6405edc/volumes/kubernetes.io~projected/kube-api-access-jx9p2 DeviceMajor:0 DeviceMinor:239 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-277 DeviceMajor:0 DeviceMinor:277 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-116 DeviceMajor:0 DeviceMinor:116 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d9a9cd3f2878ec84a255f5f74dc3526f3a1623550d44547c9ce47a07a51bb959/userdata/shm DeviceMajor:0 DeviceMinor:114 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/ee376320-9ca0-444d-ab37-9cbcb6729b11/volumes/kubernetes.io~projected/kube-api-access-25k9g DeviceMajor:0 DeviceMinor:225 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/a078565a-6970-4f42-84f4-938f1d637245/volumes/kubernetes.io~projected/kube-api-access-cxv6v DeviceMajor:0 DeviceMinor:228 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480/volumes/kubernetes.io~projected/kube-api-access-9fjk8 DeviceMajor:0 DeviceMinor:238 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/f69a00b6-d908-4485-bb0d-57594fc01d24/volumes/kubernetes.io~projected/kube-api-access-5r7qd DeviceMajor:0 DeviceMinor:244 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-312 DeviceMajor:0 DeviceMinor:312 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/d4d2218c-f9df-4d43-8727-ed3a920e23f7/volumes/kubernetes.io~projected/kube-api-access-w4qp9 DeviceMajor:0 DeviceMinor:234 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-289 DeviceMajor:0 DeviceMinor:289 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-159 DeviceMajor:0 DeviceMinor:159 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f076eaf0-b041-4db0-ba06-3d85e23bb654/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:222 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/accc57fb-75f5-4f89-9804-6ede7f77e27c/volumes/kubernetes.io~projected/kube-api-access-nwfph DeviceMajor:0 DeviceMinor:242 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/91331360-dc70-45bb-a815-e00664bae6c4/volumes/kubernetes.io~projected/kube-api-access-8w8sl DeviceMajor:0 DeviceMinor:118 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ee46779ae89b4ca2573c0db3f08f40bcd1f36bd939f6b097aaa8ab0676c68690/userdata/shm DeviceMajor:0 DeviceMinor:250 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/932a70df-3afe-4873-9449-ab6e061d3fe3/volumes/kubernetes.io~projected/kube-api-access-fv8x5 DeviceMajor:0 DeviceMinor:383 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/03de1ea6-da57-4e13-8e5a-d5e10a9f9957/volumes/kubernetes.io~projected/kube-api-access-hcj8f DeviceMajor:0 DeviceMinor:105 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/d0605021-862d-424a-a4c1-037fb005b77e/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:124 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/8e812dd9-cd05-4e9e-8710-d0920181ece2/volumes/kubernetes.io~projected/kube-api-access-s54f9 
DeviceMajor:0 DeviceMinor:236 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-332 DeviceMajor:0 DeviceMinor:332 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5ad9370766ae18aa384f3b2f07e9d3cada2bbe156f6bcba4f02016b49f4e713f/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/a078565a-6970-4f42-84f4-938f1d637245/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:217 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/6a6a616d-012a-479e-ab3d-b21295ea1805/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:231 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/9ccdc221-4ec5-487e-8ec4-85284ed628d8/volumes/kubernetes.io~projected/kube-api-access-ghd2r DeviceMajor:0 DeviceMinor:92 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-279 DeviceMajor:0 DeviceMinor:279 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-295 DeviceMajor:0 DeviceMinor:295 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-387 DeviceMajor:0 DeviceMinor:387 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:206 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/bb942756-bac7-414d-b179-cebdce588a13/volumes/kubernetes.io~projected/kube-api-access-2ktpl DeviceMajor:0 DeviceMinor:147 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cfbca75437300b7b872afd6b8a0f67b07ac16a2585e869d44683d0377dfcaeaa/userdata/shm DeviceMajor:0 DeviceMinor:137 Capacity:67108864 Type:vfs 
Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/0d72e695-0183-4ee8-8add-5425e67f7138/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:219 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-287 DeviceMajor:0 DeviceMinor:287 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/15f8941b-dba2-40ba-86d5-3318f5b635cc/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:43 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-261 DeviceMajor:0 DeviceMinor:261 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/543fb2147aca575376ed7bd211cfca3f8a0e31f62df5e58bf47f4f7fc11fc303/userdata/shm DeviceMajor:0 DeviceMinor:268 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-281 DeviceMajor:0 DeviceMinor:281 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-328 DeviceMajor:0 DeviceMinor:328 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dd0e307b59dcdef36339f9469bcea9ae60dc835b43a1e8b7190883e66520e662/userdata/shm DeviceMajor:0 DeviceMinor:384 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-103 DeviceMajor:0 DeviceMinor:103 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bb942756-bac7-414d-b179-cebdce588a13/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:126 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-297 DeviceMajor:0 DeviceMinor:297 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/025ade16-8502-4b71-a4be-f13dee081e3a/volumes/kubernetes.io~projected/kube-api-access-8l4b6 DeviceMajor:0 DeviceMinor:386 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-108 DeviceMajor:0 DeviceMinor:108 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-158 DeviceMajor:0 DeviceMinor:158 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a078565a-6970-4f42-84f4-938f1d637245/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:215 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/8126b78e-d1e4-4de7-a71d-ebc9fa0afdae/volumes/kubernetes.io~projected/kube-api-access-hww8g DeviceMajor:0 DeviceMinor:378 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b58497ff3c8993b13d6f045f9b3aa17b9b5e464305fd642acb69bc40d01db14a/userdata/shm DeviceMajor:0 DeviceMinor:148 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:127 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:214 
Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/02d02240944e9230fa342b4b1030eceabc9b6ad789e1383eef1d657905cf15af/userdata/shm DeviceMajor:0 DeviceMinor:253 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-346 DeviceMajor:0 DeviceMinor:346 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/22baed4d026a2e73b0585b205810980acb867f841d04f3d6a690f1122607e415/userdata/shm DeviceMajor:0 DeviceMinor:255 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3fdec4aed0d4d1e92fcea54e18530bddc4ceb0a577b38a5b2728e046e7e0d8a1/userdata/shm DeviceMajor:0 DeviceMinor:249 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/613533c3a19224e9e30dba35639ecd39810b8db2f7864917803baa176a7bbed0/userdata/shm DeviceMajor:0 DeviceMinor:265 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a0f6a23031d96231e99cbb9f2b16dea4d913c0ee0df84104c4f8c08579a04daa/userdata/shm DeviceMajor:0 DeviceMinor:273 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6a356215383e4477cfa420d0c3e3c8a05dd9f0afe4bf19cf96af5611814ab90a/userdata/shm DeviceMajor:0 DeviceMinor:99 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28/volumes/kubernetes.io~projected/kube-api-access-gmffc DeviceMajor:0 DeviceMinor:136 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/0999f781-3299-4cb6-ba76-2a4f4584c685/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:230 Capacity:32475525120 Type:vfs Inodes:4108169 
HasInodes:true} {Device:/var/lib/kubelet/pods/c2635254-a491-42e5-b598-461c24bf77ca/volumes/kubernetes.io~projected/kube-api-access-p4hfd DeviceMajor:0 DeviceMinor:245 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/8ee99294-4785-49d0-b493-0d734cf09396/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:246 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/f076eaf0-b041-4db0-ba06-3d85e23bb654/volumes/kubernetes.io~projected/kube-api-access-f25pg DeviceMajor:0 DeviceMinor:248 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/983b16a4206de1f333a12de0d35d4c31d9a34f31d59c85ba786062d1421c921f/userdata/shm DeviceMajor:0 DeviceMinor:308 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-320 DeviceMajor:0 DeviceMinor:320 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/009c5d81632f0f08c8ed08e157decfc8eae7a1397849b1d1bb183a9c0b36e696/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-80 DeviceMajor:0 DeviceMinor:80 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9ccdc221-4ec5-487e-8ec4-85284ed628d8/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:85 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0c606c4f78cc5d83eabd2765b617ca07a15da7eb4ca4b85bfad4f7028933f81f/userdata/shm DeviceMajor:0 DeviceMinor:119 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-130 DeviceMajor:0 DeviceMinor:130 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 
DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/04e5c67b5ae79340d56b2dfa469c98052472e299f29df84c3890635b1574d4c0/userdata/shm DeviceMajor:0 DeviceMinor:263 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-301 DeviceMajor:0 DeviceMinor:301 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6f266bad-8b30-4300-ad93-9d48e61f2440/volumes/kubernetes.io~projected/kube-api-access-shbrj DeviceMajor:0 DeviceMinor:241 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8ee99294-4785-49d0-b493-0d734cf09396/volumes/kubernetes.io~projected/kube-api-access-tb7tz DeviceMajor:0 DeviceMinor:224 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cd2ad4fe81a1a347f10f858030eebc98abfffaf65eba926cffe2c8990ddb0614/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:02d02240944e923 MacAddress:b2:6b:74:54:d9:ec Speed:10000 Mtu:8900} {Name:04e5c67b5ae7934 MacAddress:2a:a0:44:95:32:5b Speed:10000 
Mtu:8900} {Name:0d84a97391b20bb MacAddress:66:d1:0c:98:39:7e Speed:10000 Mtu:8900} {Name:22baed4d026a2e7 MacAddress:fa:b8:21:d5:56:4e Speed:10000 Mtu:8900} {Name:2cf1bdb8eb09b95 MacAddress:ee:dc:0f:bf:51:b5 Speed:10000 Mtu:8900} {Name:3fdec4aed0d4d1e MacAddress:1e:18:88:c7:09:b3 Speed:10000 Mtu:8900} {Name:543fb2147aca575 MacAddress:5e:b1:db:d5:47:9e Speed:10000 Mtu:8900} {Name:613533c3a19224e MacAddress:c2:9e:38:85:c9:02 Speed:10000 Mtu:8900} {Name:84a3629f241ccd1 MacAddress:82:5a:97:c8:d9:61 Speed:10000 Mtu:8900} {Name:8f11956d88039b0 MacAddress:8e:c4:fa:f0:7b:a2 Speed:10000 Mtu:8900} {Name:983b16a4206de1f MacAddress:56:6a:8d:7d:81:45 Speed:10000 Mtu:8900} {Name:a0f6a23031d9623 MacAddress:a2:14:43:e7:30:80 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:4e:a6:8c:3a:8f:7c Speed:0 Mtu:8900} {Name:c669ea9b66a5127 MacAddress:5e:95:31:cb:66:e2 Speed:10000 Mtu:8900} {Name:dd0e307b59dcdef MacAddress:42:a9:19:cd:0b:eb Speed:10000 Mtu:8900} {Name:ee46779ae89b4ca MacAddress:76:2c:03:58:1c:a7 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:50:e9:f6 Speed:-1 Mtu:9000} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:26:48:4a:2c:71:6e Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 
Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] 
SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Mar 18 09:54:43.663408 master-0 kubenswrapper[8244]: I0318 09:54:43.663376 8244 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Mar 18 09:54:43.663408 master-0 kubenswrapper[8244]: I0318 09:54:43.663508 8244 manager.go:233] Version: {KernelVersion:5.14.0-427.113.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202603021444-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Mar 18 09:54:43.664449 master-0 kubenswrapper[8244]: I0318 09:54:43.663790 8244 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 18 09:54:43.664449 master-0 kubenswrapper[8244]: I0318 09:54:43.663971 8244 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 18 09:54:43.664449 master-0 kubenswrapper[8244]: I0318 09:54:43.664000 8244 container_manager_linux.go:272] "Creating Container Manager object based on Node Config"
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 18 09:54:43.664449 master-0 kubenswrapper[8244]: I0318 09:54:43.664265 8244 topology_manager.go:138] "Creating topology manager with none policy"
Mar 18 09:54:43.664449 master-0 kubenswrapper[8244]: I0318 09:54:43.664279 8244 container_manager_linux.go:303] "Creating device plugin manager"
Mar 18 09:54:43.664449 master-0 kubenswrapper[8244]: I0318 09:54:43.664290 8244 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 18 09:54:43.664449 master-0 kubenswrapper[8244]: I0318 09:54:43.664318 8244 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 18 09:54:43.664449 master-0 kubenswrapper[8244]: I0318 09:54:43.664401 8244 state_mem.go:36] "Initialized new in-memory state store"
Mar 18 09:54:43.664817 master-0 kubenswrapper[8244]: I0318 09:54:43.664481 8244 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Mar 18 09:54:43.664817 master-0 kubenswrapper[8244]: I0318 09:54:43.664561 8244 kubelet.go:418] "Attempting to sync node with API server"
Mar 18 09:54:43.664817 master-0 kubenswrapper[8244]: I0318 09:54:43.664580 8244 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 18 09:54:43.664817 master-0 kubenswrapper[8244]: I0318 09:54:43.664599 8244 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Mar 18 09:54:43.664817 master-0 kubenswrapper[8244]: I0318 09:54:43.664612 8244 kubelet.go:324] "Adding apiserver pod source"
Mar 18 09:54:43.664817 master-0 kubenswrapper[8244]: I0318 09:54:43.664635 8244 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 18 09:54:43.666054 master-0 kubenswrapper[8244]: I0318 09:54:43.665972 8244 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1"
Mar 18 09:54:43.666439 master-0 kubenswrapper[8244]: I0318 09:54:43.666412 8244 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Mar 18 09:54:43.666861 master-0 kubenswrapper[8244]: I0318 09:54:43.666802 8244 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 18 09:54:43.667061 master-0 kubenswrapper[8244]: I0318 09:54:43.667025 8244 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Mar 18 09:54:43.667061 master-0 kubenswrapper[8244]: I0318 09:54:43.667051 8244 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Mar 18 09:54:43.667061 master-0 kubenswrapper[8244]: I0318 09:54:43.667059 8244 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Mar 18 09:54:43.667061 master-0 kubenswrapper[8244]: I0318 09:54:43.667068 8244 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Mar 18 09:54:43.667267 master-0 kubenswrapper[8244]: I0318 09:54:43.667077 8244 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Mar 18 09:54:43.667267 master-0 kubenswrapper[8244]: I0318 09:54:43.667086 8244 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Mar 18 09:54:43.667267 master-0 kubenswrapper[8244]: I0318 09:54:43.667094 8244 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Mar 18 09:54:43.667267 master-0 kubenswrapper[8244]: I0318 09:54:43.667102 8244 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Mar 18 09:54:43.667267 master-0 kubenswrapper[8244]: I0318 09:54:43.667112 8244 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Mar 18 09:54:43.667267 master-0 kubenswrapper[8244]: I0318 09:54:43.667120 8244 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Mar 18 09:54:43.667267 master-0 kubenswrapper[8244]: I0318 09:54:43.667133 8244 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Mar 18 09:54:43.667267 master-0 kubenswrapper[8244]: I0318 09:54:43.667148 8244 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Mar 18 09:54:43.667267 master-0 kubenswrapper[8244]: I0318 09:54:43.667185 8244 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Mar 18 09:54:43.668757 master-0 kubenswrapper[8244]: I0318 09:54:43.667707 8244 server.go:1280] "Started kubelet"
Mar 18 09:54:43.668757 master-0 kubenswrapper[8244]: I0318 09:54:43.667783 8244 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 18 09:54:43.668757 master-0 kubenswrapper[8244]: I0318 09:54:43.667887 8244 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 18 09:54:43.668757 master-0 kubenswrapper[8244]: I0318 09:54:43.667991 8244 server_v1.go:47] "podresources" method="list" useActivePods=true
Mar 18 09:54:43.668757 master-0 kubenswrapper[8244]: I0318 09:54:43.668452 8244 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 18 09:54:43.669380 master-0 kubenswrapper[8244]: I0318 09:54:43.669352 8244 server.go:449] "Adding debug handlers to kubelet server"
Mar 18 09:54:43.671507 master-0 systemd[1]: Started Kubernetes Kubelet.
Mar 18 09:54:43.679105 master-0 kubenswrapper[8244]: I0318 09:54:43.678793 8244 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 18 09:54:43.679306 master-0 kubenswrapper[8244]: I0318 09:54:43.679188 8244 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Mar 18 09:54:43.679587 master-0 kubenswrapper[8244]: I0318 09:54:43.679555 8244 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 18 09:54:43.679671 master-0 kubenswrapper[8244]: I0318 09:54:43.679636 8244 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 18 09:54:43.680110 master-0 kubenswrapper[8244]: I0318 09:54:43.679909 8244 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-19 09:43:17 +0000 UTC, rotation deadline is 2026-03-19 03:00:46.848672955 +0000 UTC
Mar 18 09:54:43.680110 master-0 kubenswrapper[8244]: I0318 09:54:43.679949 8244 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 17h6m3.168726801s for next certificate rotation
Mar 18 09:54:43.680110 master-0 kubenswrapper[8244]: I0318 09:54:43.680035 8244 volume_manager.go:287] "The desired_state_of_world populator starts"
Mar 18 09:54:43.680110 master-0 kubenswrapper[8244]: I0318 09:54:43.680043 8244 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 18 09:54:43.683879 master-0 kubenswrapper[8244]: I0318 09:54:43.683212 8244 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Mar 18 09:54:43.684008 master-0 kubenswrapper[8244]: I0318 09:54:43.683961 8244 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Mar 18 09:54:43.684008 master-0 kubenswrapper[8244]: I0318 09:54:43.683995 8244 factory.go:55] Registering systemd factory
Mar 18 09:54:43.684008 master-0 kubenswrapper[8244]: I0318 09:54:43.684007 8244 factory.go:221] Registration of the systemd container factory successfully
Mar 18 09:54:43.687901 master-0 kubenswrapper[8244]: I0318 09:54:43.684319 8244 factory.go:153] Registering CRI-O factory
Mar 18 09:54:43.687901 master-0 kubenswrapper[8244]: I0318 09:54:43.684347 8244 factory.go:221] Registration of the crio container factory successfully
Mar 18 09:54:43.687901 master-0 kubenswrapper[8244]: I0318 09:54:43.684373 8244 factory.go:103] Registering Raw factory
Mar 18 09:54:43.687901 master-0 kubenswrapper[8244]: I0318 09:54:43.684391 8244 manager.go:1196] Started watching for new ooms in manager
Mar 18 09:54:43.687901 master-0 kubenswrapper[8244]: I0318 09:54:43.684798 8244 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 18 09:54:43.687901 master-0 kubenswrapper[8244]: I0318 09:54:43.685066 8244 manager.go:319] Starting recovery of all containers
Mar 18 09:54:43.697762 master-0 kubenswrapper[8244]: I0318 09:54:43.697645 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a078565a-6970-4f42-84f4-938f1d637245" volumeName="kubernetes.io/configmap/a078565a-6970-4f42-84f4-938f1d637245-etcd-ca" seLinuxMountContext=""
Mar 18 09:54:43.697762 master-0 kubenswrapper[8244]: I0318 09:54:43.697728 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a078565a-6970-4f42-84f4-938f1d637245" volumeName="kubernetes.io/secret/a078565a-6970-4f42-84f4-938f1d637245-serving-cert" seLinuxMountContext=""
Mar 18 09:54:43.697762 master-0 kubenswrapper[8244]: I0318 09:54:43.697744 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d4d2218c-f9df-4d43-8727-ed3a920e23f7" volumeName="kubernetes.io/projected/d4d2218c-f9df-4d43-8727-ed3a920e23f7-kube-api-access-w4qp9" seLinuxMountContext=""
Mar 18 09:54:43.697762 master-0 kubenswrapper[8244]: I0318 09:54:43.697756 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0999f781-3299-4cb6-ba76-2a4f4584c685" volumeName="kubernetes.io/secret/0999f781-3299-4cb6-ba76-2a4f4584c685-serving-cert" seLinuxMountContext=""
Mar 18 09:54:43.697762 master-0 kubenswrapper[8244]: I0318 09:54:43.697770 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3646e0cd-49c9-4a98-a2e3-efe9359cc6c4" volumeName="kubernetes.io/configmap/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4-config" seLinuxMountContext=""
Mar 18 09:54:43.698022 master-0 kubenswrapper[8244]: I0318 09:54:43.697783 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec53d7fa-445b-4e1d-84ef-545f08e80ccc" volumeName="kubernetes.io/projected/ec53d7fa-445b-4e1d-84ef-545f08e80ccc-kube-api-access-wj9sq" seLinuxMountContext=""
Mar 18 09:54:43.698022 master-0 kubenswrapper[8244]: I0318 09:54:43.697800 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec53d7fa-445b-4e1d-84ef-545f08e80ccc" volumeName="kubernetes.io/secret/ec53d7fa-445b-4e1d-84ef-545f08e80ccc-serving-cert" seLinuxMountContext=""
Mar 18 09:54:43.698022 master-0 kubenswrapper[8244]: I0318 09:54:43.697811 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f076eaf0-b041-4db0-ba06-3d85e23bb654" volumeName="kubernetes.io/projected/f076eaf0-b041-4db0-ba06-3d85e23bb654-kube-api-access-f25pg" seLinuxMountContext=""
Mar 18 09:54:43.698022 master-0 kubenswrapper[8244]: I0318 09:54:43.697867 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="03de1ea6-da57-4e13-8e5a-d5e10a9f9957" volumeName="kubernetes.io/configmap/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-cni-binary-copy" seLinuxMountContext=""
Mar 18 09:54:43.698022 master-0 kubenswrapper[8244]: I0318 09:54:43.697880 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="accc57fb-75f5-4f89-9804-6ede7f77e27c" volumeName="kubernetes.io/projected/accc57fb-75f5-4f89-9804-6ede7f77e27c-kube-api-access-nwfph" seLinuxMountContext=""
Mar 18 09:54:43.698022 master-0 kubenswrapper[8244]: I0318 09:54:43.697897 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74795f5d-dcd7-4723-8931-c34b59ce3087" volumeName="kubernetes.io/projected/74795f5d-dcd7-4723-8931-c34b59ce3087-kube-api-access-8rzsk" seLinuxMountContext=""
Mar 18 09:54:43.698022 master-0 kubenswrapper[8244]: I0318 09:54:43.697967 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a078565a-6970-4f42-84f4-938f1d637245" volumeName="kubernetes.io/configmap/a078565a-6970-4f42-84f4-938f1d637245-etcd-service-ca" seLinuxMountContext=""
Mar 18 09:54:43.698022 master-0 kubenswrapper[8244]: I0318 09:54:43.697978 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6" volumeName="kubernetes.io/secret/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6-serving-cert" seLinuxMountContext=""
Mar 18 09:54:43.698022 master-0 kubenswrapper[8244]: I0318 09:54:43.697990 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3646e0cd-49c9-4a98-a2e3-efe9359cc6c4" volumeName="kubernetes.io/projected/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4-kube-api-access-p5dk8" seLinuxMountContext=""
Mar 18 09:54:43.698022 master-0 kubenswrapper[8244]: I0318 09:54:43.697999 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a078565a-6970-4f42-84f4-938f1d637245" volumeName="kubernetes.io/secret/a078565a-6970-4f42-84f4-938f1d637245-etcd-client" seLinuxMountContext=""
Mar 18 09:54:43.698022 master-0 kubenswrapper[8244]: I0318 09:54:43.698034 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2635254-a491-42e5-b598-461c24bf77ca" volumeName="kubernetes.io/projected/c2635254-a491-42e5-b598-461c24bf77ca-kube-api-access-p4hfd" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698047 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d0605021-862d-424a-a4c1-037fb005b77e" volumeName="kubernetes.io/projected/d0605021-862d-424a-a4c1-037fb005b77e-kube-api-access-cxj5c" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698058 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="db52ca42-e458-407f-9eeb-bf6de6405edc" volumeName="kubernetes.io/projected/db52ca42-e458-407f-9eeb-bf6de6405edc-kube-api-access-jx9p2" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698068 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d72e695-0183-4ee8-8add-5425e67f7138" volumeName="kubernetes.io/secret/0d72e695-0183-4ee8-8add-5425e67f7138-serving-cert" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698117 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a6a616d-012a-479e-ab3d-b21295ea1805" volumeName="kubernetes.io/configmap/6a6a616d-012a-479e-ab3d-b21295ea1805-config" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698139 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bb35841e-d992-4044-aaaa-06c9faf47bd0" volumeName="kubernetes.io/projected/bb35841e-d992-4044-aaaa-06c9faf47bd0-kube-api-access-zlxfz" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698153 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2635254-a491-42e5-b598-461c24bf77ca" volumeName="kubernetes.io/configmap/c2635254-a491-42e5-b598-461c24bf77ca-trusted-ca" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698167 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d26036f1-bdce-4ec5-873f-962fa7e8e6c1" volumeName="kubernetes.io/empty-dir/d26036f1-bdce-4ec5-873f-962fa7e8e6c1-operand-assets" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698203 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0999f781-3299-4cb6-ba76-2a4f4584c685" volumeName="kubernetes.io/projected/0999f781-3299-4cb6-ba76-2a4f4584c685-kube-api-access" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698221 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ee99294-4785-49d0-b493-0d734cf09396" volumeName="kubernetes.io/projected/8ee99294-4785-49d0-b493-0d734cf09396-kube-api-access-tb7tz" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698236 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6" volumeName="kubernetes.io/projected/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6-kube-api-access" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698253 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91331360-dc70-45bb-a815-e00664bae6c4" volumeName="kubernetes.io/configmap/91331360-dc70-45bb-a815-e00664bae6c4-cni-binary-copy" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698294 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91331360-dc70-45bb-a815-e00664bae6c4" volumeName="kubernetes.io/projected/91331360-dc70-45bb-a815-e00664bae6c4-kube-api-access-8w8sl" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698310 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d26036f1-bdce-4ec5-873f-962fa7e8e6c1" volumeName="kubernetes.io/secret/d26036f1-bdce-4ec5-873f-962fa7e8e6c1-cluster-olm-operator-serving-cert" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698320 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f69a00b6-d908-4485-bb0d-57594fc01d24" volumeName="kubernetes.io/projected/f69a00b6-d908-4485-bb0d-57594fc01d24-kube-api-access-5r7qd" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698330 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0999f781-3299-4cb6-ba76-2a4f4584c685" volumeName="kubernetes.io/configmap/0999f781-3299-4cb6-ba76-2a4f4584c685-config" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698340 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15f8941b-dba2-40ba-86d5-3318f5b635cc" volumeName="kubernetes.io/configmap/15f8941b-dba2-40ba-86d5-3318f5b635cc-service-ca" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698367 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ee99294-4785-49d0-b493-0d734cf09396" volumeName="kubernetes.io/configmap/8ee99294-4785-49d0-b493-0d734cf09396-trusted-ca" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698379 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91331360-dc70-45bb-a815-e00664bae6c4" volumeName="kubernetes.io/configmap/91331360-dc70-45bb-a815-e00664bae6c4-cni-sysctl-allowlist" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698413 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d02e790-b9d0-4e2d-a97d-ec2eaf720f28" volumeName="kubernetes.io/configmap/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-ovnkube-config" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698440 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a078565a-6970-4f42-84f4-938f1d637245" volumeName="kubernetes.io/configmap/a078565a-6970-4f42-84f4-938f1d637245-config" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698452 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a078565a-6970-4f42-84f4-938f1d637245" volumeName="kubernetes.io/projected/a078565a-6970-4f42-84f4-938f1d637245-kube-api-access-cxv6v" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698468 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="accc57fb-75f5-4f89-9804-6ede7f77e27c" volumeName="kubernetes.io/projected/accc57fb-75f5-4f89-9804-6ede7f77e27c-bound-sa-token" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698478 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="03de1ea6-da57-4e13-8e5a-d5e10a9f9957" volumeName="kubernetes.io/projected/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-kube-api-access-hcj8f" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698488 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8126b78e-d1e4-4de7-a71d-ebc9fa0afdae" volumeName="kubernetes.io/projected/8126b78e-d1e4-4de7-a71d-ebc9fa0afdae-kube-api-access-hww8g" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698514 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ca4a0040-a638-46fa-a1cb-a19d83a7ebe4" volumeName="kubernetes.io/projected/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-kube-api-access-dkzq9" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698525 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f69a00b6-d908-4485-bb0d-57594fc01d24" volumeName="kubernetes.io/configmap/f69a00b6-d908-4485-bb0d-57594fc01d24-telemetry-config" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698536 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bb942756-bac7-414d-b179-cebdce588a13" volumeName="kubernetes.io/configmap/bb942756-bac7-414d-b179-cebdce588a13-env-overrides" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698548 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bb942756-bac7-414d-b179-cebdce588a13" volumeName="kubernetes.io/configmap/bb942756-bac7-414d-b179-cebdce588a13-ovnkube-identity-cm" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698557 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" volumeName="kubernetes.io/secret/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480-serving-cert" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698566 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15f8941b-dba2-40ba-86d5-3318f5b635cc" volumeName="kubernetes.io/projected/15f8941b-dba2-40ba-86d5-3318f5b635cc-kube-api-access" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698596 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e812dd9-cd05-4e9e-8710-d0920181ece2" volumeName="kubernetes.io/projected/8e812dd9-cd05-4e9e-8710-d0920181ece2-kube-api-access-s54f9" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698606 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bb942756-bac7-414d-b179-cebdce588a13" volumeName="kubernetes.io/projected/bb942756-bac7-414d-b179-cebdce588a13-kube-api-access-2ktpl" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698616 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="03de1ea6-da57-4e13-8e5a-d5e10a9f9957" volumeName="kubernetes.io/configmap/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-daemon-config" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698629 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" volumeName="kubernetes.io/empty-dir/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480-available-featuregates" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698639 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d02e790-b9d0-4e2d-a97d-ec2eaf720f28" volumeName="kubernetes.io/secret/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-ovn-node-metrics-cert" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698649 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a6a616d-012a-479e-ab3d-b21295ea1805" volumeName="kubernetes.io/projected/6a6a616d-012a-479e-ab3d-b21295ea1805-kube-api-access" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698680 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6f266bad-8b30-4300-ad93-9d48e61f2440" volumeName="kubernetes.io/projected/6f266bad-8b30-4300-ad93-9d48e61f2440-kube-api-access-shbrj" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698691 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d0605021-862d-424a-a4c1-037fb005b77e" volumeName="kubernetes.io/configmap/d0605021-862d-424a-a4c1-037fb005b77e-env-overrides" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698701 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f076eaf0-b041-4db0-ba06-3d85e23bb654" volumeName="kubernetes.io/configmap/f076eaf0-b041-4db0-ba06-3d85e23bb654-trusted-ca-bundle" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698712 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6f266bad-8b30-4300-ad93-9d48e61f2440" volumeName="kubernetes.io/configmap/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-trusted-ca" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698725 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cb5158f-2199-42c0-995a-8490c9ec8a95" volumeName="kubernetes.io/projected/8cb5158f-2199-42c0-995a-8490c9ec8a95-kube-api-access-p2chb" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698751 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="932a70df-3afe-4873-9449-ab6e061d3fe3" volumeName="kubernetes.io/projected/932a70df-3afe-4873-9449-ab6e061d3fe3-kube-api-access-fv8x5" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698770 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee376320-9ca0-444d-ab37-9cbcb6729b11" volumeName="kubernetes.io/projected/ee376320-9ca0-444d-ab37-9cbcb6729b11-kube-api-access-25k9g" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698784 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6" volumeName="kubernetes.io/configmap/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6-config" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698801 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3646e0cd-49c9-4a98-a2e3-efe9359cc6c4" volumeName="kubernetes.io/secret/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4-serving-cert" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698843 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ccdc221-4ec5-487e-8ec4-85284ed628d8" volumeName="kubernetes.io/projected/9ccdc221-4ec5-487e-8ec4-85284ed628d8-kube-api-access-ghd2r" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698856 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bb35841e-d992-4044-aaaa-06c9faf47bd0" volumeName="kubernetes.io/configmap/bb35841e-d992-4044-aaaa-06c9faf47bd0-config" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698867 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d0605021-862d-424a-a4c1-037fb005b77e" volumeName="kubernetes.io/secret/d0605021-862d-424a-a4c1-037fb005b77e-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698880 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d26036f1-bdce-4ec5-873f-962fa7e8e6c1" volumeName="kubernetes.io/projected/d26036f1-bdce-4ec5-873f-962fa7e8e6c1-kube-api-access-lhzg4" seLinuxMountContext=""
Mar 18 09:54:43.698766 master-0 kubenswrapper[8244]: I0318 09:54:43.698891 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f076eaf0-b041-4db0-ba06-3d85e23bb654" volumeName="kubernetes.io/configmap/f076eaf0-b041-4db0-ba06-3d85e23bb654-service-ca-bundle" seLinuxMountContext=""
Mar 18 09:54:43.705174 master-0 kubenswrapper[8244]: I0318 09:54:43.698921 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ee99294-4785-49d0-b493-0d734cf09396" volumeName="kubernetes.io/projected/8ee99294-4785-49d0-b493-0d734cf09396-bound-sa-token" seLinuxMountContext=""
Mar 18 09:54:43.705174 master-0 kubenswrapper[8244]: I0318 09:54:43.698933 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91331360-dc70-45bb-a815-e00664bae6c4" volumeName="kubernetes.io/configmap/91331360-dc70-45bb-a815-e00664bae6c4-whereabouts-flatfile-configmap" seLinuxMountContext=""
Mar 18 09:54:43.705174 master-0 kubenswrapper[8244]: I0318 09:54:43.698943 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f076eaf0-b041-4db0-ba06-3d85e23bb654" volumeName="kubernetes.io/configmap/f076eaf0-b041-4db0-ba06-3d85e23bb654-config" seLinuxMountContext=""
Mar 18 09:54:43.705174 master-0 kubenswrapper[8244]: I0318 09:54:43.698956 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d02e790-b9d0-4e2d-a97d-ec2eaf720f28" volumeName="kubernetes.io/projected/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-kube-api-access-gmffc" seLinuxMountContext=""
Mar 18 09:54:43.705174 master-0 kubenswrapper[8244]: I0318 09:54:43.698967 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bb942756-bac7-414d-b179-cebdce588a13" volumeName="kubernetes.io/secret/bb942756-bac7-414d-b179-cebdce588a13-webhook-cert" seLinuxMountContext=""
Mar 18 09:54:43.705174 master-0 kubenswrapper[8244]: I0318 09:54:43.698978 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" volumeName="kubernetes.io/projected/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480-kube-api-access-9fjk8" seLinuxMountContext=""
Mar 18 09:54:43.705174 master-0 kubenswrapper[8244]: I0318 09:54:43.699118 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62b82d72-d73c-451a-84e1-551d73036aa8" volumeName="kubernetes.io/projected/62b82d72-d73c-451a-84e1-551d73036aa8-kube-api-access-lvnrf" seLinuxMountContext=""
Mar 18 09:54:43.705174 master-0 kubenswrapper[8244]: I0318 09:54:43.699134 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ccdc221-4ec5-487e-8ec4-85284ed628d8" volumeName="kubernetes.io/secret/9ccdc221-4ec5-487e-8ec4-85284ed628d8-metrics-tls" seLinuxMountContext=""
Mar 18 09:54:43.705174 master-0 kubenswrapper[8244]: I0318 09:54:43.699151 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d02e790-b9d0-4e2d-a97d-ec2eaf720f28" volumeName="kubernetes.io/configmap/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-env-overrides" seLinuxMountContext=""
Mar 18 09:54:43.705174 master-0 kubenswrapper[8244]: I0318 09:54:43.699161 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d02e790-b9d0-4e2d-a97d-ec2eaf720f28" volumeName="kubernetes.io/configmap/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-ovnkube-script-lib" seLinuxMountContext=""
Mar 18 09:54:43.705174 master-0 kubenswrapper[8244]: I0318 09:54:43.699239 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="accc57fb-75f5-4f89-9804-6ede7f77e27c" volumeName="kubernetes.io/configmap/accc57fb-75f5-4f89-9804-6ede7f77e27c-trusted-ca" seLinuxMountContext=""
Mar 18 09:54:43.705174 master-0 kubenswrapper[8244]: I0318 09:54:43.699524 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0442ec6c-5973-40a5-a0c3-dc02de46d343" volumeName="kubernetes.io/projected/0442ec6c-5973-40a5-a0c3-dc02de46d343-kube-api-access-5x6ht" seLinuxMountContext=""
Mar 18 09:54:43.705174 master-0 kubenswrapper[8244]: I0318 09:54:43.699540 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d72e695-0183-4ee8-8add-5425e67f7138" volumeName="kubernetes.io/projected/0d72e695-0183-4ee8-8add-5425e67f7138-kube-api-access-g6bvr" seLinuxMountContext=""
Mar 18 09:54:43.705174 master-0 kubenswrapper[8244]: I0318 09:54:43.699572 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a6a616d-012a-479e-ab3d-b21295ea1805" volumeName="kubernetes.io/secret/6a6a616d-012a-479e-ab3d-b21295ea1805-serving-cert" seLinuxMountContext=""
Mar 18 09:54:43.705174 master-0 kubenswrapper[8244]: I0318 09:54:43.699587 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bb35841e-d992-4044-aaaa-06c9faf47bd0" volumeName="kubernetes.io/secret/bb35841e-d992-4044-aaaa-06c9faf47bd0-serving-cert" seLinuxMountContext=""
Mar 18 09:54:43.705174 master-0 kubenswrapper[8244]: I0318 09:54:43.699598 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f076eaf0-b041-4db0-ba06-3d85e23bb654" volumeName="kubernetes.io/secret/f076eaf0-b041-4db0-ba06-3d85e23bb654-serving-cert" seLinuxMountContext=""
Mar 18 09:54:43.705174 master-0 kubenswrapper[8244]: I0318 09:54:43.699610 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="025ade16-8502-4b71-a4be-f13dee081e3a" volumeName="kubernetes.io/projected/025ade16-8502-4b71-a4be-f13dee081e3a-kube-api-access-8l4b6" seLinuxMountContext=""
Mar 18 09:54:43.705174 master-0 kubenswrapper[8244]: I0318 09:54:43.699627 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d72e695-0183-4ee8-8add-5425e67f7138" volumeName="kubernetes.io/configmap/0d72e695-0183-4ee8-8add-5425e67f7138-config" seLinuxMountContext=""
Mar 18 09:54:43.705174 master-0 kubenswrapper[8244]: I0318 09:54:43.699886 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec53d7fa-445b-4e1d-84ef-545f08e80ccc" volumeName="kubernetes.io/configmap/ec53d7fa-445b-4e1d-84ef-545f08e80ccc-config" seLinuxMountContext=""
Mar 18 09:54:43.705174 master-0 kubenswrapper[8244]: I0318 09:54:43.700106 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62b82d72-d73c-451a-84e1-551d73036aa8" volumeName="kubernetes.io/configmap/62b82d72-d73c-451a-84e1-551d73036aa8-iptables-alerter-script" seLinuxMountContext=""
Mar 18 09:54:43.705174 master-0 kubenswrapper[8244]: I0318 09:54:43.700145 8244 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d0605021-862d-424a-a4c1-037fb005b77e" volumeName="kubernetes.io/configmap/d0605021-862d-424a-a4c1-037fb005b77e-ovnkube-config" seLinuxMountContext=""
Mar 18 09:54:43.705174 master-0 kubenswrapper[8244]: I0318 09:54:43.700423 8244 reconstruct.go:97] "Volume reconstruction finished"
Mar 18 09:54:43.705174 master-0 kubenswrapper[8244]: I0318 09:54:43.700437 8244 reconciler.go:26] "Reconciler: start to sync state"
Mar 18 09:54:43.705174 master-0 kubenswrapper[8244]: I0318 09:54:43.704257 8244 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Mar 18 09:54:43.728189 master-0 kubenswrapper[8244]: I0318 09:54:43.728057 8244 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 18 09:54:43.731719 master-0 kubenswrapper[8244]: I0318 09:54:43.731678 8244 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 18 09:54:43.731719 master-0 kubenswrapper[8244]: I0318 09:54:43.731718 8244 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 18 09:54:43.731862 master-0 kubenswrapper[8244]: I0318 09:54:43.731741 8244 kubelet.go:2335] "Starting kubelet main sync loop"
Mar 18 09:54:43.731862 master-0 kubenswrapper[8244]: E0318 09:54:43.731781 8244 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 18 09:54:43.733665 master-0 kubenswrapper[8244]: I0318 09:54:43.733632 8244 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 18 09:54:43.742895 master-0 kubenswrapper[8244]: I0318 09:54:43.742771 8244 generic.go:334] "Generic (PLEG): container finished" podID="3796179a-f6c1-4f97-a2e1-d32106a5d8e9" containerID="bd008f41fdcd1da5525afb4e170a05e1a1f3c337467181cdcfc21b203b5549da" exitCode=0
Mar 18 09:54:43.759115 master-0 kubenswrapper[8244]: I0318 09:54:43.757956 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xgdvw_03de1ea6-da57-4e13-8e5a-d5e10a9f9957/kube-multus/0.log"
Mar 18 09:54:43.759115 master-0 kubenswrapper[8244]: I0318 09:54:43.759107 8244 generic.go:334] "Generic (PLEG): container finished" podID="03de1ea6-da57-4e13-8e5a-d5e10a9f9957" containerID="2da220e2852846e9b471d19bf3329629d81b1d881746691dfdddb60fd750adba" exitCode=1
Mar 18 09:54:43.763526 master-0 kubenswrapper[8244]: I0318 09:54:43.763490 8244 generic.go:334] "Generic (PLEG): container finished" podID="2cda3479-c3ed-4d79-bbd3-888e64b328f7" containerID="d957302f7adb981277fbf539c8fb8ba8b510cdf036ae3b42bb11275306e467ec" exitCode=0
Mar 18 09:54:43.766631 master-0 kubenswrapper[8244]: I0318 09:54:43.766512 8244 generic.go:334] "Generic (PLEG): container finished" podID="d26036f1-bdce-4ec5-873f-962fa7e8e6c1" containerID="ded65abc153650de9d5b3f05283a7442214a212644c7845fac73ca03c4499d84" exitCode=0
Mar 18 09:54:43.775850 master-0 kubenswrapper[8244]: I0318 09:54:43.775777 8244 generic.go:334] "Generic (PLEG): container finished" podID="91331360-dc70-45bb-a815-e00664bae6c4" containerID="0538eb942c1197a086b3273af768571780d6d5af303141476810f1cd7daec3cc" exitCode=0
Mar 18 09:54:43.775850 master-0 kubenswrapper[8244]: I0318 09:54:43.775809 8244 generic.go:334] "Generic (PLEG): container finished" podID="91331360-dc70-45bb-a815-e00664bae6c4" containerID="f03028f16df79cfb2d65134dc28295edb8b443255b855706b86769e87e1604c6" exitCode=0
Mar 18 09:54:43.775850 master-0 kubenswrapper[8244]: I0318 09:54:43.775834 8244 generic.go:334] "Generic (PLEG): container finished" podID="91331360-dc70-45bb-a815-e00664bae6c4" containerID="8de3d5cda49c071629c169597f57fc4a39ffa0565faf4afa9da96f88d8b22b28" exitCode=0
Mar 18 09:54:43.775850 master-0 kubenswrapper[8244]: I0318 09:54:43.775844 8244 generic.go:334] "Generic (PLEG): container finished" podID="91331360-dc70-45bb-a815-e00664bae6c4" containerID="ba4b50efa1c5a3ef4b380af81a12c8288cb0cec49cd61d28198db983936b1f94" exitCode=0
Mar 18 09:54:43.775850 master-0 kubenswrapper[8244]: I0318 09:54:43.775853 8244 generic.go:334] "Generic (PLEG): container finished" podID="91331360-dc70-45bb-a815-e00664bae6c4" containerID="160626554dc940cedbe7ec0ddb596f31e480d63196f634936e05702f85c45819" exitCode=0
Mar 18 09:54:43.775850 master-0 kubenswrapper[8244]: I0318 09:54:43.775863 8244 generic.go:334] "Generic (PLEG): container finished" podID="91331360-dc70-45bb-a815-e00664bae6c4" containerID="8ef686cc40f68aff82f23ce87e06ff13fba380e3cd6b61b827160c9e73c4cbbc" exitCode=0
Mar 18 09:54:43.783124 master-0 kubenswrapper[8244]: I0318 09:54:43.783070 8244 generic.go:334] "Generic (PLEG): container finished" podID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerID="981f5359f2b3c5ba98385487e0fffb3f9c331fb34bb0e106e475367f63bb51f9" exitCode=0
Mar 18 09:54:43.798216 master-0 kubenswrapper[8244]: I0318 09:54:43.794353 8244 generic.go:334] "Generic (PLEG): container finished" podID="9d02e790-b9d0-4e2d-a97d-ec2eaf720f28" containerID="273c8765db6facd550b6e56f450546d9b1b71f8e90628bc1352e6d3fe67f7a08" exitCode=0
Mar 18 09:54:43.803703 master-0 kubenswrapper[8244]: I0318 09:54:43.803660 8244 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="0e1b90509e26fef960c00500d9ad97c317d8639e8d0264437904c7c3c438399a" exitCode=0
Mar 18 09:54:43.831929 master-0 kubenswrapper[8244]: E0318 09:54:43.831882 8244 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 18 09:54:43.838252 master-0 kubenswrapper[8244]: I0318 09:54:43.838147 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log"
Mar 18 09:54:43.838597 master-0 kubenswrapper[8244]: I0318 09:54:43.838549 8244 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="d2de72527ed6923a5ffc8e75d557ea5b2e3fbc7f0f250aeb34b97b6d6a4b673b" exitCode=1
Mar 18 09:54:43.838597 master-0 kubenswrapper[8244]: I0318 09:54:43.838589 8244 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="a88536111853576d542216418fa9e6a7c0a796244d77dbfb3568461d1ad235ad" exitCode=0
Mar 18 09:54:43.905869 master-0 kubenswrapper[8244]: I0318 09:54:43.905814 8244 manager.go:324] Recovery completed
Mar 18 09:54:43.971262 master-0 kubenswrapper[8244]: I0318 09:54:43.971164 8244 cpu_manager.go:225] "Starting CPU manager" policy="none"
Mar 18 09:54:43.971262 master-0 kubenswrapper[8244]: I0318 09:54:43.971239 8244 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 18 09:54:43.971262 master-0 kubenswrapper[8244]: I0318 09:54:43.971276 8244 state_mem.go:36] "Initialized new in-memory state store"
Mar 18 09:54:43.972167 master-0 kubenswrapper[8244]: I0318 09:54:43.972127 8244 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 18 09:54:43.972220 master-0 kubenswrapper[8244]: I0318 09:54:43.972163 8244 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 18 09:54:43.972220 master-0 kubenswrapper[8244]: I0318 09:54:43.972209 8244 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint"
Mar 18 09:54:43.972275 master-0 kubenswrapper[8244]: I0318 09:54:43.972237 8244 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet=""
Mar 18 09:54:43.972275 master-0 kubenswrapper[8244]: I0318 09:54:43.972251 8244 policy_none.go:49] "None policy: Start"
Mar 18 09:54:43.975583 master-0 kubenswrapper[8244]: I0318 09:54:43.975550 8244 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 18 09:54:43.975632 master-0 kubenswrapper[8244]: I0318 09:54:43.975586 8244 state_mem.go:35] "Initializing new in-memory state store"
Mar 18 09:54:43.975835 master-0 kubenswrapper[8244]: I0318 09:54:43.975796 8244 state_mem.go:75] "Updated machine memory state"
Mar 18 09:54:43.975835 master-0 kubenswrapper[8244]: I0318 09:54:43.975814 8244 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint"
Mar 18 09:54:43.996446 master-0 kubenswrapper[8244]: I0318 09:54:43.996410 8244 manager.go:334] "Starting Device Plugin manager"
Mar 18 09:54:43.996644 master-0 kubenswrapper[8244]: I0318 09:54:43.996473 8244 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 18 09:54:43.996644 master-0 kubenswrapper[8244]: I0318 09:54:43.996488 8244 server.go:79] "Starting device plugin registration server"
Mar 18 09:54:43.997443 master-0 kubenswrapper[8244]: I0318 09:54:43.997102 8244 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 18 09:54:43.997443 master-0 kubenswrapper[8244]: I0318 09:54:43.997220 8244 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 18 09:54:43.997585 master-0 kubenswrapper[8244]: I0318 09:54:43.997554 8244 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Mar 18 09:54:43.997695 master-0 kubenswrapper[8244]: I0318 09:54:43.997668 8244 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Mar 18 09:54:43.997738 master-0 kubenswrapper[8244]: I0318 09:54:43.997705 8244 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 18 09:54:44.032519 master-0 kubenswrapper[8244]: I0318 09:54:44.032172 8244 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Mar 18 09:54:44.034588 master-0 kubenswrapper[8244]: I0318 09:54:44.034532 8244 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4da9f8c70f1716c5e032f09a6a5017ac3987811ec91a138b7a837bbb86e4f381"
Mar 18 09:54:44.034658 master-0 kubenswrapper[8244]: I0318 09:54:44.034583 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"5230f2c731392582b4c5b7f1d1739dca596269f4bff091decf0daf9fa0a42c23"}
Mar 18 09:54:44.034705 master-0 kubenswrapper[8244]: I0318 09:54:44.034666 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"5ad9370766ae18aa384f3b2f07e9d3cada2bbe156f6bcba4f02016b49f4e713f"}
Mar 18 09:54:44.034750 master-0 kubenswrapper[8244]: I0318 09:54:44.034703 8244 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c985fd1643f6c6fd8181176e1149d324515647d1a390abe33081b9ded6959a0f"
Mar 18 09:54:44.034788 master-0 kubenswrapper[8244]: I0318 09:54:44.034751 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"a8a79bb9813c53d6a7944ac3a61efc1cc0406057f3915265e59c26643cc48a9e"}
Mar 18 09:54:44.034788 master-0 kubenswrapper[8244]: I0318 09:54:44.034765 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"0f4ef82cd98a641ac2372a9202df576de9d16287dc2775cc6c0529b93f52b3e6"}
Mar 18 09:54:44.034788 master-0 kubenswrapper[8244]: I0318 09:54:44.034776 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"cd2ad4fe81a1a347f10f858030eebc98abfffaf65eba926cffe2c8990ddb0614"}
Mar 18 09:54:44.034891 master-0 kubenswrapper[8244]: I0318 09:54:44.034814 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"f94e501b0ad12236c03bc538f983952a18a8058deb0777210379742bce193fde"}
Mar 18 09:54:44.034891 master-0 kubenswrapper[8244]: I0318 09:54:44.034843 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"5a898e220fc5eed6a4a32559913535749eb16cc2a7cd17e978e4c62aa7e6452a"}
Mar 18 09:54:44.034891 master-0 kubenswrapper[8244]: I0318 09:54:44.034856 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerDied","Data":"0e1b90509e26fef960c00500d9ad97c317d8639e8d0264437904c7c3c438399a"}
Mar 18 09:54:44.034891 master-0 kubenswrapper[8244]: I0318 09:54:44.034872 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"401219e24c2bd7d9e48328027e1c78136e8f25304b76126b40b8362b04997723"}
Mar 18 09:54:44.034891 master-0 kubenswrapper[8244]: I0318 09:54:44.034888 8244 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5759283fba42fbbd311783807000d9d77eaa5d0bcefb9d4dbe9eb43e6dbcd178"
Mar 18 09:54:44.035020 master-0 kubenswrapper[8244]: I0318 09:54:44.034899 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"f305e60f069ccf418ecdd9a5248eadee88e9d2be3ca57cfd6181d1ab96140c57"}
Mar 18 09:54:44.035020 master-0 kubenswrapper[8244]: I0318 09:54:44.034912 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"f2607ae8dbefbd514d748ad1e03e092ca2114ce2ce09f7065e579402bdb6a4c5"}
Mar 18 09:54:44.035020 master-0 kubenswrapper[8244]: I0318 09:54:44.034923 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"cf5a74b27454f5e8b1c18f8ef6d030c5b30a033cbc5baf882408ad3e065176ae"}
Mar 18 09:54:44.035020 master-0 kubenswrapper[8244]: I0318 09:54:44.034945 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"b4356aff744ddd84b751a19b6b1c926a7d4c3a2ecf0278ac7c42e1a78ef7db64"}
Mar 18 09:54:44.035020 master-0 kubenswrapper[8244]: I0318 09:54:44.034958 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"d2de72527ed6923a5ffc8e75d557ea5b2e3fbc7f0f250aeb34b97b6d6a4b673b"}
Mar 18 09:54:44.035020 master-0 kubenswrapper[8244]: I0318 09:54:44.034970 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"a88536111853576d542216418fa9e6a7c0a796244d77dbfb3568461d1ad235ad"}
Mar 18 09:54:44.035020 master-0 kubenswrapper[8244]: I0318 09:54:44.034978 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"5f76a44720ca3021debcea94492e12e9442fdb9f8fbe338ee1965217a14109bd"}
Mar 18 09:54:44.045089 master-0 kubenswrapper[8244]: E0318 09:54:44.045042 8244 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 09:54:44.045300 master-0 kubenswrapper[8244]: E0318 09:54:44.045283 8244 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:54:44.045368 master-0 kubenswrapper[8244]: E0318 09:54:44.045330 8244 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:54:44.045549 master-0 kubenswrapper[8244]: W0318 09:54:44.045523 8244 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Mar 18 09:54:44.045604 master-0 kubenswrapper[8244]: E0318 09:54:44.045542 8244 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 09:54:44.045815 master-0 kubenswrapper[8244]: E0318 09:54:44.045618 8244 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 09:54:44.097698 master-0 kubenswrapper[8244]: I0318 09:54:44.097502 8244 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:54:44.099608 master-0 kubenswrapper[8244]: I0318 09:54:44.099566 8244 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:54:44.099608 master-0 kubenswrapper[8244]: I0318 09:54:44.099594 8244 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:54:44.099608 master-0 kubenswrapper[8244]: I0318 09:54:44.099602 8244 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:54:44.099888 master-0 kubenswrapper[8244]: I0318 09:54:44.099661 8244 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 18 09:54:44.106497 master-0 kubenswrapper[8244]: I0318 09:54:44.106127 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:54:44.106497 master-0 kubenswrapper[8244]: I0318 09:54:44.106159 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:54:44.106497 master-0 kubenswrapper[8244]: I0318 09:54:44.106177 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 09:54:44.106497 master-0 kubenswrapper[8244]: I0318 09:54:44.106193 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:54:44.106497 master-0 kubenswrapper[8244]: I0318 09:54:44.106209 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 09:54:44.106497 master-0 kubenswrapper[8244]: I0318 09:54:44.106224 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 09:54:44.106497 master-0 kubenswrapper[8244]: I0318 09:54:44.106238 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:54:44.106497 master-0 kubenswrapper[8244]: I0318 09:54:44.106255 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:54:44.106497 master-0 kubenswrapper[8244]: I0318 09:54:44.106282 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 09:54:44.106497 master-0 kubenswrapper[8244]: I0318 09:54:44.106296 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 09:54:44.106497 master-0 kubenswrapper[8244]: I0318 09:54:44.106309 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 09:54:44.106497 master-0 kubenswrapper[8244]: I0318 09:54:44.106325 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:54:44.106497 master-0 kubenswrapper[8244]: I0318 09:54:44.106338 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:54:44.106497 master-0 kubenswrapper[8244]: I0318 09:54:44.106351 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:54:44.106497 master-0 kubenswrapper[8244]: I0318 09:54:44.106365 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:54:44.106497 master-0 kubenswrapper[8244]: I0318 09:54:44.106379 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:54:44.107506 master-0 kubenswrapper[8244]: I0318 09:54:44.106546 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:54:44.109769 master-0 kubenswrapper[8244]: I0318 09:54:44.109716 8244 kubelet_node_status.go:115] "Node was previously registered" node="master-0"
Mar 18 09:54:44.109969 master-0 kubenswrapper[8244]: I0318 09:54:44.109933 8244 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Mar 18 09:54:44.207291 master-0 kubenswrapper[8244]: I0318 09:54:44.207235 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:54:44.207291 master-0 kubenswrapper[8244]: I0318 09:54:44.207282 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:54:44.207291 master-0 kubenswrapper[8244]: I0318 09:54:44.207301 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 09:54:44.207645 master-0 kubenswrapper[8244]: I0318 09:54:44.207318 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 09:54:44.207645 master-0 kubenswrapper[8244]: I0318 09:54:44.207334 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 09:54:44.207645 master-0 kubenswrapper[8244]: I0318 09:54:44.207373 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:54:44.207645 master-0 kubenswrapper[8244]: I0318 09:54:44.207388 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:54:44.207645 master-0 kubenswrapper[8244]: I0318 09:54:44.207404 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 09:54:44.207645 master-0 kubenswrapper[8244]: I0318 09:54:44.207423 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 09:54:44.207645 master-0 kubenswrapper[8244]: I0318 09:54:44.207445 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 09:54:44.207645 master-0 kubenswrapper[8244]: I0318 09:54:44.207465 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:54:44.207645 master-0 kubenswrapper[8244]: I0318 09:54:44.207482 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:54:44.207645 master-0 kubenswrapper[8244]: I0318 09:54:44.207496 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:54:44.207645 master-0 kubenswrapper[8244]: I0318 09:54:44.207522 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:54:44.207645 master-0 kubenswrapper[8244]: I0318 09:54:44.207538 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:54:44.207645 master-0 kubenswrapper[8244]: I0318 09:54:44.207554 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:54:44.207645 master-0 kubenswrapper[8244]: I0318 09:54:44.207571 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:54:44.207645 master-0 kubenswrapper[8244]: I0318 09:54:44.207634 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:54:44.208205 master-0 kubenswrapper[8244]: I0318 09:54:44.207689 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:54:44.208205 master-0 kubenswrapper[8244]: I0318 09:54:44.207710 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:54:44.208205 master-0 kubenswrapper[8244]: I0318 09:54:44.207733 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 09:54:44.208205 master-0 kubenswrapper[8244]: I0318 09:54:44.207753 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 09:54:44.208205 master-0 kubenswrapper[8244]: I0318 09:54:44.207772 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 09:54:44.208205 master-0 kubenswrapper[8244]: I0318 09:54:44.207792 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:54:44.208205 master-0 kubenswrapper[8244]: I0318 09:54:44.207811 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:54:44.208205 master-0 kubenswrapper[8244]: I0318 09:54:44.207847 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 09:54:44.208205 master-0 kubenswrapper[8244]: I0318 09:54:44.207869 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 09:54:44.208205 master-0 kubenswrapper[8244]: I0318 09:54:44.207887 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 09:54:44.208205 master-0 kubenswrapper[8244]: I0318 09:54:44.207914 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\"
(UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 09:54:44.208205 master-0 kubenswrapper[8244]: I0318 09:54:44.207934 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 09:54:44.208205 master-0 kubenswrapper[8244]: I0318 09:54:44.207953 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 09:54:44.208205 master-0 kubenswrapper[8244]: I0318 09:54:44.207973 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 09:54:44.208205 master-0 kubenswrapper[8244]: I0318 09:54:44.207992 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 09:54:44.208205 master-0 kubenswrapper[8244]: I0318 09:54:44.208010 8244 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 09:54:44.666001 master-0 kubenswrapper[8244]: I0318 09:54:44.665950 8244 apiserver.go:52] "Watching apiserver" Mar 18 09:54:44.681735 master-0 kubenswrapper[8244]: I0318 09:54:44.681692 8244 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 18 09:54:44.684675 master-0 kubenswrapper[8244]: I0318 09:54:44.684484 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc","openshift-multus/multus-xgdvw","openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx","openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-2l6cq","openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr","openshift-dns-operator/dns-operator-9c5679d8f-jrmkr","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb","openshift-route-controller-manager/route-controller-manager-966f67d76-nzx5f","assisted-installer/assisted-installer-controller-ttq68","openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698","openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k","openshift-ovn-kubernetes/ovnkube-node-frnfl","openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr","kube-system/bootstrap-kube-scheduler-master-0","openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c","openshift-controller-manager-operator/openshift-control
ler-manager-operator-8c94f4649-g25jq","openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz","openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-kube-storage-version-migrator/migrator-8487694857-8tqwj","openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m","openshift-multus/multus-additional-cni-plugins-dg6dw","openshift-network-operator/iptables-alerter-r7h65","openshift-config-operator/openshift-config-operator-95bf4f4d-495pg","openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-multus/network-metrics-daemon-tbxt4","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-mqbmq","openshift-controller-manager/controller-manager-f5df8899c-zvdvg","openshift-marketplace/marketplace-operator-89ccd998f-2glpv","openshift-network-operator/network-operator-7bd846bfc4-8srnz","openshift-service-ca/service-ca-79bc6b8d76-jjcsv","openshift-network-node-identity/network-node-identity-7fl4x","openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s","openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv","kube-system/bootstrap-kube-controller-manager-master-0","openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q","openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt","openshift-network-diagnostics/network-check-target-42l55"] Mar 18 09:54:44.684854 master-0 kubenswrapper[8244]: I0318 09:54:44.684784 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-ttq68" Mar 18 09:54:44.685723 master-0 kubenswrapper[8244]: I0318 09:54:44.685689 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr" Mar 18 09:54:44.687938 master-0 kubenswrapper[8244]: I0318 09:54:44.687878 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s" Mar 18 09:54:44.688006 master-0 kubenswrapper[8244]: I0318 09:54:44.687944 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m" Mar 18 09:54:44.688053 master-0 kubenswrapper[8244]: I0318 09:54:44.688011 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" Mar 18 09:54:44.688053 master-0 kubenswrapper[8244]: I0318 09:54:44.688037 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-9c5679d8f-jrmkr" Mar 18 09:54:44.688282 master-0 kubenswrapper[8244]: I0318 09:54:44.688244 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" Mar 18 09:54:44.688726 master-0 kubenswrapper[8244]: I0318 09:54:44.688692 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 18 09:54:44.688790 master-0 kubenswrapper[8244]: I0318 09:54:44.688731 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 18 09:54:44.688911 master-0 kubenswrapper[8244]: I0318 09:54:44.688882 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 18 09:54:44.689344 master-0 kubenswrapper[8244]: I0318 09:54:44.689297 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 18 09:54:44.690186 master-0 kubenswrapper[8244]: I0318 09:54:44.690140 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv" Mar 18 09:54:44.690186 master-0 kubenswrapper[8244]: I0318 09:54:44.690184 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m" Mar 18 09:54:44.690283 master-0 kubenswrapper[8244]: I0318 09:54:44.690197 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k" Mar 18 09:54:44.690283 master-0 kubenswrapper[8244]: I0318 09:54:44.690255 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" Mar 18 09:54:44.690486 master-0 kubenswrapper[8244]: I0318 09:54:44.690455 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 18 09:54:44.690594 master-0 kubenswrapper[8244]: I0318 09:54:44.690573 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 18 09:54:44.690931 master-0 kubenswrapper[8244]: I0318 09:54:44.690900 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 18 09:54:44.693228 master-0 kubenswrapper[8244]: I0318 09:54:44.693121 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 18 09:54:44.693228 master-0 kubenswrapper[8244]: I0318 09:54:44.693213 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 18 09:54:44.694486 master-0 kubenswrapper[8244]: I0318 09:54:44.694455 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:54:44.695301 master-0 kubenswrapper[8244]: I0318 09:54:44.695272 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 18 09:54:44.695433 master-0 kubenswrapper[8244]: I0318 09:54:44.695368 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2" Mar 18 09:54:44.697233 master-0 kubenswrapper[8244]: I0318 09:54:44.697193 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 18 09:54:44.697471 master-0 kubenswrapper[8244]: I0318 09:54:44.697438 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 18 09:54:44.697697 master-0 kubenswrapper[8244]: I0318 09:54:44.697638 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 18 09:54:44.707937 master-0 kubenswrapper[8244]: I0318 09:54:44.707894 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 18 09:54:44.708684 master-0 kubenswrapper[8244]: I0318 09:54:44.708663 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 18 09:54:44.710028 master-0 kubenswrapper[8244]: I0318 09:54:44.710000 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 18 09:54:44.710449 master-0 kubenswrapper[8244]: I0318 09:54:44.710431 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 18 09:54:44.710850 master-0 kubenswrapper[8244]: I0318 09:54:44.710834 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 18 09:54:44.719978 master-0 kubenswrapper[8244]: I0318 09:54:44.719809 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 18 09:54:44.720326 
master-0 kubenswrapper[8244]: I0318 09:54:44.720299 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 18 09:54:44.720473 master-0 kubenswrapper[8244]: I0318 09:54:44.720432 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 18 09:54:44.720769 master-0 kubenswrapper[8244]: I0318 09:54:44.720688 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 18 09:54:44.720769 master-0 kubenswrapper[8244]: I0318 09:54:44.720735 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 18 09:54:44.720950 master-0 kubenswrapper[8244]: I0318 09:54:44.720926 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config" Mar 18 09:54:44.720950 master-0 kubenswrapper[8244]: I0318 09:54:44.720943 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 18 09:54:44.721591 master-0 kubenswrapper[8244]: I0318 09:54:44.721551 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 18 09:54:44.721733 master-0 kubenswrapper[8244]: I0318 09:54:44.721698 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 18 09:54:44.722012 master-0 kubenswrapper[8244]: I0318 09:54:44.721921 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 18 09:54:44.723151 master-0 kubenswrapper[8244]: I0318 09:54:44.722019 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 18 09:54:44.723151 master-0 
kubenswrapper[8244]: I0318 09:54:44.722290 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 18 09:54:44.723151 master-0 kubenswrapper[8244]: I0318 09:54:44.722372 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-zvdvg" Mar 18 09:54:44.723151 master-0 kubenswrapper[8244]: I0318 09:54:44.722420 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 18 09:54:44.723151 master-0 kubenswrapper[8244]: I0318 09:54:44.722457 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 18 09:54:44.723151 master-0 kubenswrapper[8244]: I0318 09:54:44.722462 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 18 09:54:44.723151 master-0 kubenswrapper[8244]: I0318 09:54:44.722588 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 18 09:54:44.723151 master-0 kubenswrapper[8244]: I0318 09:54:44.722599 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 18 09:54:44.723151 master-0 kubenswrapper[8244]: I0318 09:54:44.722714 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 18 09:54:44.723151 master-0 kubenswrapper[8244]: I0318 09:54:44.722730 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 18 09:54:44.723151 master-0 kubenswrapper[8244]: I0318 09:54:44.722859 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 18 09:54:44.723151 master-0 kubenswrapper[8244]: I0318 
09:54:44.722878 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 18 09:54:44.723151 master-0 kubenswrapper[8244]: I0318 09:54:44.722963 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 18 09:54:44.723151 master-0 kubenswrapper[8244]: I0318 09:54:44.722976 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 18 09:54:44.723151 master-0 kubenswrapper[8244]: I0318 09:54:44.723097 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 18 09:54:44.724849 master-0 kubenswrapper[8244]: I0318 09:54:44.723128 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 18 09:54:44.724849 master-0 kubenswrapper[8244]: I0318 09:54:44.724789 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-966f67d76-nzx5f" Mar 18 09:54:44.725006 master-0 kubenswrapper[8244]: I0318 09:54:44.723211 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 18 09:54:44.725006 master-0 kubenswrapper[8244]: I0318 09:54:44.723215 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 18 09:54:44.725121 master-0 kubenswrapper[8244]: I0318 09:54:44.725013 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-79bc6b8d76-jjcsv" Mar 18 09:54:44.725121 master-0 kubenswrapper[8244]: I0318 09:54:44.723255 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 18 09:54:44.725121 master-0 kubenswrapper[8244]: I0318 09:54:44.723278 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 18 09:54:44.725121 master-0 kubenswrapper[8244]: I0318 09:54:44.723337 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 18 09:54:44.725252 master-0 kubenswrapper[8244]: I0318 09:54:44.723389 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 18 09:54:44.725252 master-0 kubenswrapper[8244]: I0318 09:54:44.723405 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 18 09:54:44.725494 master-0 kubenswrapper[8244]: I0318 09:54:44.723433 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 18 09:54:44.725494 master-0 kubenswrapper[8244]: I0318 09:54:44.723467 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 18 09:54:44.725875 master-0 kubenswrapper[8244]: I0318 09:54:44.723555 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 18 09:54:44.725875 master-0 kubenswrapper[8244]: I0318 09:54:44.725577 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 18 09:54:44.725875 master-0 kubenswrapper[8244]: I0318 09:54:44.723603 8244 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 18 09:54:44.725875 master-0 kubenswrapper[8244]: I0318 09:54:44.723611 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 18 09:54:44.725875 master-0 kubenswrapper[8244]: I0318 09:54:44.723655 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 18 09:54:44.725875 master-0 kubenswrapper[8244]: I0318 09:54:44.723699 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 18 09:54:44.725875 master-0 kubenswrapper[8244]: I0318 09:54:44.723704 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 18 09:54:44.725875 master-0 kubenswrapper[8244]: I0318 09:54:44.723776 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 18 09:54:44.725875 master-0 kubenswrapper[8244]: I0318 09:54:44.723865 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 18 09:54:44.725875 master-0 kubenswrapper[8244]: I0318 09:54:44.723929 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 18 09:54:44.725875 master-0 kubenswrapper[8244]: I0318 09:54:44.723960 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 18 09:54:44.725875 master-0 kubenswrapper[8244]: I0318 09:54:44.724030 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 18 09:54:44.725875 master-0 kubenswrapper[8244]: I0318 09:54:44.724205 8244 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 18 09:54:44.725875 master-0 kubenswrapper[8244]: I0318 09:54:44.724717 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 18 09:54:44.729142 master-0 kubenswrapper[8244]: I0318 09:54:44.729103 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 18 09:54:44.732571 master-0 kubenswrapper[8244]: I0318 09:54:44.732532 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 18 09:54:44.732969 master-0 kubenswrapper[8244]: I0318 09:54:44.732948 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 18 09:54:44.733025 master-0 kubenswrapper[8244]: I0318 09:54:44.733014 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 18 09:54:44.733253 master-0 kubenswrapper[8244]: I0318 09:54:44.733231 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 18 09:54:44.733303 master-0 kubenswrapper[8244]: I0318 09:54:44.733272 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 18 09:54:44.733354 master-0 kubenswrapper[8244]: I0318 09:54:44.733334 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 18 09:54:44.733452 master-0 kubenswrapper[8244]: I0318 09:54:44.733431 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 18 09:54:44.733482 master-0 kubenswrapper[8244]: I0318 09:54:44.733471 8244 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 18 09:54:44.733766 master-0 kubenswrapper[8244]: I0318 09:54:44.733697 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 18 09:54:44.734234 master-0 kubenswrapper[8244]: I0318 09:54:44.734204 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 18 09:54:44.735085 master-0 kubenswrapper[8244]: I0318 09:54:44.734733 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 18 09:54:44.735313 master-0 kubenswrapper[8244]: I0318 09:54:44.735285 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 18 09:54:44.736123 master-0 kubenswrapper[8244]: I0318 09:54:44.736059 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-zvdvg"
Mar 18 09:54:44.737953 master-0 kubenswrapper[8244]: I0318 09:54:44.736945 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 18 09:54:44.737953 master-0 kubenswrapper[8244]: I0318 09:54:44.737071 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 18 09:54:44.737953 master-0 kubenswrapper[8244]: I0318 09:54:44.737202 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 18 09:54:44.737953 master-0 kubenswrapper[8244]: I0318 09:54:44.737224 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 18 09:54:44.737953 master-0 kubenswrapper[8244]: I0318 09:54:44.737252 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 18 09:54:44.737953 master-0 kubenswrapper[8244]: I0318 09:54:44.737414 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Mar 18 09:54:44.737953 master-0 kubenswrapper[8244]: I0318 09:54:44.737523 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Mar 18 09:54:44.737953 master-0 kubenswrapper[8244]: I0318 09:54:44.737594 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 18 09:54:44.737953 master-0 kubenswrapper[8244]: I0318 09:54:44.737617 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 18 09:54:44.737953 master-0 kubenswrapper[8244]: I0318 09:54:44.737623 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 18 09:54:44.737953 master-0 kubenswrapper[8244]: I0318 09:54:44.737687 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 18 09:54:44.742380 master-0 kubenswrapper[8244]: I0318 09:54:44.741553 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Mar 18 09:54:44.742599 master-0 kubenswrapper[8244]: I0318 09:54:44.742572 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 18 09:54:44.742891 master-0 kubenswrapper[8244]: I0318 09:54:44.742869 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Mar 18 09:54:44.743070 master-0 kubenswrapper[8244]: I0318 09:54:44.743047 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 18 09:54:44.744509 master-0 kubenswrapper[8244]: I0318 09:54:44.744487 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 18 09:54:44.745105 master-0 kubenswrapper[8244]: I0318 09:54:44.745082 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 18 09:54:44.745643 master-0 kubenswrapper[8244]: I0318 09:54:44.745402 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 18 09:54:44.746036 master-0 kubenswrapper[8244]: I0318 09:54:44.745756 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Mar 18 09:54:44.746564 master-0 kubenswrapper[8244]: I0318 09:54:44.746545 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 18 09:54:44.747972 master-0 kubenswrapper[8244]: I0318 09:54:44.747951 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 18 09:54:44.749333 master-0 kubenswrapper[8244]: I0318 09:54:44.749311 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 18 09:54:44.768132 master-0 kubenswrapper[8244]: I0318 09:54:44.768108 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Mar 18 09:54:44.787434 master-0 kubenswrapper[8244]: I0318 09:54:44.785537 8244 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 18 09:54:44.791093 master-0 kubenswrapper[8244]: I0318 09:54:44.791058 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 18 09:54:44.807654 master-0 kubenswrapper[8244]: I0318 09:54:44.807616 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 18 09:54:44.822515 master-0 kubenswrapper[8244]: I0318 09:54:44.822446 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8l4b6\" (UniqueName: \"kubernetes.io/projected/025ade16-8502-4b71-a4be-f13dee081e3a-kube-api-access-8l4b6\") pod \"025ade16-8502-4b71-a4be-f13dee081e3a\" (UID: \"025ade16-8502-4b71-a4be-f13dee081e3a\") "
Mar 18 09:54:44.822682 master-0 kubenswrapper[8244]: I0318 09:54:44.822645 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-var-lib-cni-bin\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:54:44.822721 master-0 kubenswrapper[8244]: I0318 09:54:44.822706 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-daemon-config\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:54:44.822938 master-0 kubenswrapper[8244]: I0318 09:54:44.822741 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0999f781-3299-4cb6-ba76-2a4f4584c685-config\") pod \"kube-controller-manager-operator-ff989d6cc-pzqqc\" (UID: \"0999f781-3299-4cb6-ba76-2a4f4584c685\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc"
Mar 18 09:54:44.822938 master-0 kubenswrapper[8244]: I0318 09:54:44.822774 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-slash\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:54:44.823186 master-0 kubenswrapper[8244]: I0318 09:54:44.823161 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/5ea90fee-5b5e-4b59-bfc4-969ee8c7912e-signing-cabundle\") pod \"service-ca-79bc6b8d76-jjcsv\" (UID: \"5ea90fee-5b5e-4b59-bfc4-969ee8c7912e\") " pod="openshift-service-ca/service-ca-79bc6b8d76-jjcsv"
Mar 18 09:54:44.823319 master-0 kubenswrapper[8244]: I0318 09:54:44.823305 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15f8941b-dba2-40ba-86d5-3318f5b635cc-kube-api-access\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr"
Mar 18 09:54:44.823428 master-0 kubenswrapper[8244]: I0318 09:54:44.823414 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ee99294-4785-49d0-b493-0d734cf09396-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m"
Mar 18 09:54:44.823500 master-0 kubenswrapper[8244]: I0318 09:54:44.823485 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/d26036f1-bdce-4ec5-873f-962fa7e8e6c1-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-mrc8q\" (UID: \"d26036f1-bdce-4ec5-873f-962fa7e8e6c1\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q"
Mar 18 09:54:44.823589 master-0 kubenswrapper[8244]: I0318 09:54:44.823576 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-serving-cert\") pod \"route-controller-manager-966f67d76-nzx5f\" (UID: \"c2d6bbbe-b2fb-4df3-b93f-b3b47532719b\") " pod="openshift-route-controller-manager/route-controller-manager-966f67d76-nzx5f"
Mar 18 09:54:44.823662 master-0 kubenswrapper[8244]: I0318 09:54:44.823631 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0999f781-3299-4cb6-ba76-2a4f4584c685-config\") pod \"kube-controller-manager-operator-ff989d6cc-pzqqc\" (UID: \"0999f781-3299-4cb6-ba76-2a4f4584c685\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc"
Mar 18 09:54:44.823742 master-0 kubenswrapper[8244]: I0318 09:54:44.823646 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-system-cni-dir\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw"
Mar 18 09:54:44.823814 master-0 kubenswrapper[8244]: I0318 09:54:44.823361 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-daemon-config\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:54:44.823814 master-0 kubenswrapper[8244]: I0318 09:54:44.823775 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:54:44.823917 master-0 kubenswrapper[8244]: I0318 09:54:44.823857 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fn45\" (UniqueName: \"kubernetes.io/projected/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-kube-api-access-4fn45\") pod \"route-controller-manager-966f67d76-nzx5f\" (UID: \"c2d6bbbe-b2fb-4df3-b93f-b3b47532719b\") " pod="openshift-route-controller-manager/route-controller-manager-966f67d76-nzx5f"
Mar 18 09:54:44.823917 master-0 kubenswrapper[8244]: I0318 09:54:44.823886 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/15f8941b-dba2-40ba-86d5-3318f5b635cc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr"
Mar 18 09:54:44.823976 master-0 kubenswrapper[8244]: I0318 09:54:44.823915 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rzsk\" (UniqueName: \"kubernetes.io/projected/74795f5d-dcd7-4723-8931-c34b59ce3087-kube-api-access-8rzsk\") pod \"network-check-target-42l55\" (UID: \"74795f5d-dcd7-4723-8931-c34b59ce3087\") " pod="openshift-network-diagnostics/network-check-target-42l55"
Mar 18 09:54:44.823976 master-0 kubenswrapper[8244]: I0318 09:54:44.823941 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2chb\" (UniqueName: \"kubernetes.io/projected/8cb5158f-2199-42c0-995a-8490c9ec8a95-kube-api-access-p2chb\") pod \"dns-operator-9c5679d8f-jrmkr\" (UID: \"8cb5158f-2199-42c0-995a-8490c9ec8a95\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-jrmkr"
Mar 18 09:54:44.823976 master-0 kubenswrapper[8244]: I0318 09:54:44.823968 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz"
Mar 18 09:54:44.824121 master-0 kubenswrapper[8244]: I0318 09:54:44.824087 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a6a616d-012a-479e-ab3d-b21295ea1805-config\") pod \"kube-apiserver-operator-8b68b9d9b-smghb\" (UID: \"6a6a616d-012a-479e-ab3d-b21295ea1805\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb"
Mar 18 09:54:44.824352 master-0 kubenswrapper[8244]: I0318 09:54:44.824329 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d0605021-862d-424a-a4c1-037fb005b77e-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-8txtx\" (UID: \"d0605021-862d-424a-a4c1-037fb005b77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx"
Mar 18 09:54:44.824401 master-0 kubenswrapper[8244]: I0318 09:54:44.824358 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b46jq\" (UniqueName: \"kubernetes.io/projected/5ea90fee-5b5e-4b59-bfc4-969ee8c7912e-kube-api-access-b46jq\") pod \"service-ca-79bc6b8d76-jjcsv\" (UID: \"5ea90fee-5b5e-4b59-bfc4-969ee8c7912e\") " pod="openshift-service-ca/service-ca-79bc6b8d76-jjcsv"
Mar 18 09:54:44.824401 master-0 kubenswrapper[8244]: I0318 09:54:44.824376 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-cni-netd\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:54:44.824401 master-0 kubenswrapper[8244]: I0318 09:54:44.824395 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fjk8\" (UniqueName: \"kubernetes.io/projected/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480-kube-api-access-9fjk8\") pod \"openshift-config-operator-95bf4f4d-495pg\" (UID: \"0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg"
Mar 18 09:54:44.824481 master-0 kubenswrapper[8244]: I0318 09:54:44.824410 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-cni-binary-copy\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:54:44.824481 master-0 kubenswrapper[8244]: I0318 09:54:44.824429 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6bvr\" (UniqueName: \"kubernetes.io/projected/0d72e695-0183-4ee8-8add-5425e67f7138-kube-api-access-g6bvr\") pod \"openshift-apiserver-operator-d65958b8-zz68c\" (UID: \"0d72e695-0183-4ee8-8add-5425e67f7138\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c"
Mar 18 09:54:44.824481 master-0 kubenswrapper[8244]: I0318 09:54:44.824432 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ee99294-4785-49d0-b493-0d734cf09396-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m"
Mar 18 09:54:44.824481 master-0 kubenswrapper[8244]: I0318 09:54:44.824474 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-hkzr2\" (UID: \"ca4a0040-a638-46fa-a1cb-a19d83a7ebe4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2"
Mar 18 09:54:44.824590 master-0 kubenswrapper[8244]: I0318 09:54:44.824500 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-2glpv\" (UID: \"6f266bad-8b30-4300-ad93-9d48e61f2440\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv"
Mar 18 09:54:44.824590 master-0 kubenswrapper[8244]: I0318 09:54:44.824517 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/91331360-dc70-45bb-a815-e00664bae6c4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw"
Mar 18 09:54:44.824590 master-0 kubenswrapper[8244]: I0318 09:54:44.824534 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-env-overrides\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:54:44.824590 master-0 kubenswrapper[8244]: I0318 09:54:44.824552 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-socket-dir-parent\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:54:44.824590 master-0 kubenswrapper[8244]: I0318 09:54:44.824569 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6a6a616d-012a-479e-ab3d-b21295ea1805-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-smghb\" (UID: \"6a6a616d-012a-479e-ab3d-b21295ea1805\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb"
Mar 18 09:54:44.824590 master-0 kubenswrapper[8244]: I0318 09:54:44.824587 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-etc-openvswitch\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:54:44.824811 master-0 kubenswrapper[8244]: I0318 09:54:44.824618 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f076eaf0-b041-4db0-ba06-3d85e23bb654-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-4q9tr\" (UID: \"f076eaf0-b041-4db0-ba06-3d85e23bb654\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr"
Mar 18 09:54:44.824982 master-0 kubenswrapper[8244]: I0318 09:54:44.824909 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/91331360-dc70-45bb-a815-e00664bae6c4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw"
Mar 18 09:54:44.824982 master-0 kubenswrapper[8244]: I0318 09:54:44.824942 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f076eaf0-b041-4db0-ba06-3d85e23bb654-serving-cert\") pod \"authentication-operator-5885bfd7f4-4q9tr\" (UID: \"f076eaf0-b041-4db0-ba06-3d85e23bb654\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr"
Mar 18 09:54:44.825066 master-0 kubenswrapper[8244]: I0318 09:54:44.825031 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/bb942756-bac7-414d-b179-cebdce588a13-ovnkube-identity-cm\") pod \"network-node-identity-7fl4x\" (UID: \"bb942756-bac7-414d-b179-cebdce588a13\") " pod="openshift-network-node-identity/network-node-identity-7fl4x"
Mar 18 09:54:44.825097 master-0 kubenswrapper[8244]: I0318 09:54:44.825064 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4-config\") pod \"openshift-controller-manager-operator-8c94f4649-g25jq\" (UID: \"3646e0cd-49c9-4a98-a2e3-efe9359cc6c4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.825245 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4-config\") pod \"openshift-controller-manager-operator-8c94f4649-g25jq\" (UID: \"3646e0cd-49c9-4a98-a2e3-efe9359cc6c4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.825317 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv8x5\" (UniqueName: \"kubernetes.io/projected/932a70df-3afe-4873-9449-ab6e061d3fe3-kube-api-access-fv8x5\") pod \"csi-snapshot-controller-64854d9cff-2l6cq\" (UID: \"932a70df-3afe-4873-9449-ab6e061d3fe3\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-2l6cq"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.825386 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f076eaf0-b041-4db0-ba06-3d85e23bb654-serving-cert\") pod \"authentication-operator-5885bfd7f4-4q9tr\" (UID: \"f076eaf0-b041-4db0-ba06-3d85e23bb654\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.825390 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8ee99294-4785-49d0-b493-0d734cf09396-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.825444 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a078565a-6970-4f42-84f4-938f1d637245-etcd-client\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.825465 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f076eaf0-b041-4db0-ba06-3d85e23bb654-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-4q9tr\" (UID: \"f076eaf0-b041-4db0-ba06-3d85e23bb654\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.825467 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-r8fkv\" (UID: \"d4d2218c-f9df-4d43-8727-ed3a920e23f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.825324 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-env-overrides\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.825493 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bb942756-bac7-414d-b179-cebdce588a13-webhook-cert\") pod \"network-node-identity-7fl4x\" (UID: \"bb942756-bac7-414d-b179-cebdce588a13\") " pod="openshift-network-node-identity/network-node-identity-7fl4x"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.825508 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-run-openvswitch\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.825526 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-ovn-node-metrics-cert\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.825545 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/accc57fb-75f5-4f89-9804-6ede7f77e27c-bound-sa-token\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.825563 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f076eaf0-b041-4db0-ba06-3d85e23bb654-config\") pod \"authentication-operator-5885bfd7f4-4q9tr\" (UID: \"f076eaf0-b041-4db0-ba06-3d85e23bb654\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.825581 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-run-ovn-kubernetes\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.825600 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb35841e-d992-4044-aaaa-06c9faf47bd0-serving-cert\") pod \"service-ca-operator-b865698dc-pgtbr\" (UID: \"bb35841e-d992-4044-aaaa-06c9faf47bd0\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.825618 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5dk8\" (UniqueName: \"kubernetes.io/projected/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4-kube-api-access-p5dk8\") pod \"openshift-controller-manager-operator-8c94f4649-g25jq\" (UID: \"3646e0cd-49c9-4a98-a2e3-efe9359cc6c4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.825637 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx9p2\" (UniqueName: \"kubernetes.io/projected/db52ca42-e458-407f-9eeb-bf6de6405edc-kube-api-access-jx9p2\") pod \"olm-operator-5c9796789-hc74k\" (UID: \"db52ca42-e458-407f-9eeb-bf6de6405edc\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.825655 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.825672 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d0605021-862d-424a-a4c1-037fb005b77e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-8txtx\" (UID: \"d0605021-862d-424a-a4c1-037fb005b77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.825678 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a078565a-6970-4f42-84f4-938f1d637245-etcd-client\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.825692 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-vj8tt\" (UID: \"3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.825708 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4qp9\" (UniqueName: \"kubernetes.io/projected/d4d2218c-f9df-4d43-8727-ed3a920e23f7-kube-api-access-w4qp9\") pod \"package-server-manager-7b95f86987-r8fkv\" (UID: \"d4d2218c-f9df-4d43-8727-ed3a920e23f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.825727 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f076eaf0-b041-4db0-ba06-3d85e23bb654-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-4q9tr\" (UID: \"f076eaf0-b041-4db0-ba06-3d85e23bb654\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.825725 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/bb942756-bac7-414d-b179-cebdce588a13-ovnkube-identity-cm\") pod \"network-node-identity-7fl4x\" (UID: \"bb942756-bac7-414d-b179-cebdce588a13\") " pod="openshift-network-node-identity/network-node-identity-7fl4x"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.825748 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25k9g\" (UniqueName: \"kubernetes.io/projected/ee376320-9ca0-444d-ab37-9cbcb6729b11-kube-api-access-25k9g\") pod \"catalog-operator-68f85b4d6c-fhz5s\" (UID: \"ee376320-9ca0-444d-ab37-9cbcb6729b11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.825775 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/15f8941b-dba2-40ba-86d5-3318f5b635cc-service-ca\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.825796 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r7qd\" (UniqueName: \"kubernetes.io/projected/f69a00b6-d908-4485-bb0d-57594fc01d24-kube-api-access-5r7qd\") pod \"cluster-monitoring-operator-58845fbb57-8kx9m\" (UID: \"f69a00b6-d908-4485-bb0d-57594fc01d24\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.825814 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-var-lib-cni-multus\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.825880 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f076eaf0-b041-4db0-ba06-3d85e23bb654-config\") pod \"authentication-operator-5885bfd7f4-4q9tr\" (UID: \"f076eaf0-b041-4db0-ba06-3d85e23bb654\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.825882 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/5ea90fee-5b5e-4b59-bfc4-969ee8c7912e-signing-key\") pod \"service-ca-79bc6b8d76-jjcsv\" (UID: \"5ea90fee-5b5e-4b59-bfc4-969ee8c7912e\") " pod="openshift-service-ca/service-ca-79bc6b8d76-jjcsv"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.825969 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwfph\" (UniqueName: \"kubernetes.io/projected/accc57fb-75f5-4f89-9804-6ede7f77e27c-kube-api-access-nwfph\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.825991 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/91331360-dc70-45bb-a815-e00664bae6c4-cni-binary-copy\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.826008 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-ovnkube-config\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.826028 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.826045 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-system-cni-dir\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.826070 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d0605021-862d-424a-a4c1-037fb005b77e-env-overrides\") pod \"ovnkube-control-plane-57f769d897-8txtx\" (UID: \"d0605021-862d-424a-a4c1-037fb005b77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.826085 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bb942756-bac7-414d-b179-cebdce588a13-webhook-cert\") pod \"network-node-identity-7fl4x\" (UID: \"bb942756-bac7-414d-b179-cebdce588a13\") " pod="openshift-network-node-identity/network-node-identity-7fl4x"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.826098 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d0605021-862d-424a-a4c1-037fb005b77e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-8txtx\" (UID: \"d0605021-862d-424a-a4c1-037fb005b77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.826096 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/15f8941b-dba2-40ba-86d5-3318f5b635cc-service-ca\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.826091 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert\") pod \"catalog-operator-68f85b4d6c-fhz5s\" (UID: \"ee376320-9ca0-444d-ab37-9cbcb6729b11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s"
Mar 18 09:54:44.826231 master-0 kubenswrapper[8244]: I0318 09:54:44.826215 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb35841e-d992-4044-aaaa-06c9faf47bd0-serving-cert\") pod \"service-ca-operator-b865698dc-pgtbr\" (UID: \"bb35841e-d992-4044-aaaa-06c9faf47bd0\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr"
Mar 18 09:54:44.827280 master-0 kubenswrapper[8244]: I0318 09:54:44.826327 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d0605021-862d-424a-a4c1-037fb005b77e-env-overrides\") pod \"ovnkube-control-plane-57f769d897-8txtx\" (UID: \"d0605021-862d-424a-a4c1-037fb005b77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx"
Mar 18 09:54:44.827280 master-0 kubenswrapper[8244]: I0318 09:54:44.826406 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hww8g\" (UniqueName: \"kubernetes.io/projected/8126b78e-d1e4-4de7-a71d-ebc9fa0afdae-kube-api-access-hww8g\") pod \"migrator-8487694857-8tqwj\" (UID: \"8126b78e-d1e4-4de7-a71d-ebc9fa0afdae\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-8tqwj"
Mar 18 09:54:44.827280 master-0 kubenswrapper[8244]: I0318 09:54:44.826434 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkzq9\" (UniqueName: \"kubernetes.io/projected/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-kube-api-access-dkzq9\") pod \"multus-admission-controller-5dbbb8b86f-hkzr2\" (UID: \"ca4a0040-a638-46fa-a1cb-a19d83a7ebe4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2"
Mar 18 09:54:44.827280 master-0 kubenswrapper[8244]: I0318 09:54:44.826442 8244 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f076eaf0-b041-4db0-ba06-3d85e23bb654-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-4q9tr\" (UID: \"f076eaf0-b041-4db0-ba06-3d85e23bb654\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr" Mar 18 09:54:44.827280 master-0 kubenswrapper[8244]: I0318 09:54:44.826454 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb7tz\" (UniqueName: \"kubernetes.io/projected/8ee99294-4785-49d0-b493-0d734cf09396-kube-api-access-tb7tz\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m" Mar 18 09:54:44.827280 master-0 kubenswrapper[8244]: I0318 09:54:44.826498 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 09:54:44.827280 master-0 kubenswrapper[8244]: I0318 09:54:44.826525 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bb942756-bac7-414d-b179-cebdce588a13-env-overrides\") pod \"network-node-identity-7fl4x\" (UID: \"bb942756-bac7-414d-b179-cebdce588a13\") " pod="openshift-network-node-identity/network-node-identity-7fl4x" Mar 18 09:54:44.827280 master-0 kubenswrapper[8244]: I0318 09:54:44.826608 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ktpl\" (UniqueName: \"kubernetes.io/projected/bb942756-bac7-414d-b179-cebdce588a13-kube-api-access-2ktpl\") pod \"network-node-identity-7fl4x\" (UID: 
\"bb942756-bac7-414d-b179-cebdce588a13\") " pod="openshift-network-node-identity/network-node-identity-7fl4x" Mar 18 09:54:44.827280 master-0 kubenswrapper[8244]: I0318 09:54:44.826632 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert\") pod \"olm-operator-5c9796789-hc74k\" (UID: \"db52ca42-e458-407f-9eeb-bf6de6405edc\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k" Mar 18 09:54:44.827280 master-0 kubenswrapper[8244]: I0318 09:54:44.826649 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-client-ca\") pod \"route-controller-manager-966f67d76-nzx5f\" (UID: \"c2d6bbbe-b2fb-4df3-b93f-b3b47532719b\") " pod="openshift-route-controller-manager/route-controller-manager-966f67d76-nzx5f" Mar 18 09:54:44.827280 master-0 kubenswrapper[8244]: I0318 09:54:44.826687 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-run-netns\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.827280 master-0 kubenswrapper[8244]: I0318 09:54:44.826711 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcj8f\" (UniqueName: \"kubernetes.io/projected/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-kube-api-access-hcj8f\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.827280 master-0 kubenswrapper[8244]: I0318 09:54:44.826735 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/91331360-dc70-45bb-a815-e00664bae6c4-cni-binary-copy\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 09:54:44.827280 master-0 kubenswrapper[8244]: I0318 09:54:44.826767 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d72e695-0183-4ee8-8add-5425e67f7138-config\") pod \"openshift-apiserver-operator-d65958b8-zz68c\" (UID: \"0d72e695-0183-4ee8-8add-5425e67f7138\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c" Mar 18 09:54:44.827280 master-0 kubenswrapper[8244]: I0318 09:54:44.826790 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/d26036f1-bdce-4ec5-873f-962fa7e8e6c1-operand-assets\") pod \"cluster-olm-operator-67dcd4998-mrc8q\" (UID: \"d26036f1-bdce-4ec5-873f-962fa7e8e6c1\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q" Mar 18 09:54:44.827280 master-0 kubenswrapper[8244]: I0318 09:54:44.826807 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-os-release\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 09:54:44.827280 master-0 kubenswrapper[8244]: I0318 09:54:44.826840 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-run-netns\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.827280 master-0 
kubenswrapper[8244]: I0318 09:54:44.826861 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bb942756-bac7-414d-b179-cebdce588a13-env-overrides\") pod \"network-node-identity-7fl4x\" (UID: \"bb942756-bac7-414d-b179-cebdce588a13\") " pod="openshift-network-node-identity/network-node-identity-7fl4x" Mar 18 09:54:44.827280 master-0 kubenswrapper[8244]: I0318 09:54:44.826882 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-log-socket\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.827280 master-0 kubenswrapper[8244]: I0318 09:54:44.826902 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4hfd\" (UniqueName: \"kubernetes.io/projected/c2635254-a491-42e5-b598-461c24bf77ca-kube-api-access-p4hfd\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" Mar 18 09:54:44.827280 master-0 kubenswrapper[8244]: I0318 09:54:44.826919 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-cni-dir\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.827280 master-0 kubenswrapper[8244]: I0318 09:54:44.826938 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/accc57fb-75f5-4f89-9804-6ede7f77e27c-trusted-ca\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: 
\"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" Mar 18 09:54:44.827280 master-0 kubenswrapper[8244]: I0318 09:54:44.826988 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/d26036f1-bdce-4ec5-873f-962fa7e8e6c1-operand-assets\") pod \"cluster-olm-operator-67dcd4998-mrc8q\" (UID: \"d26036f1-bdce-4ec5-873f-962fa7e8e6c1\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q" Mar 18 09:54:44.827280 master-0 kubenswrapper[8244]: I0318 09:54:44.827005 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-ovnkube-config\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.827280 master-0 kubenswrapper[8244]: I0318 09:54:44.827068 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w8sl\" (UniqueName: \"kubernetes.io/projected/91331360-dc70-45bb-a815-e00664bae6c4-kube-api-access-8w8sl\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 09:54:44.827280 master-0 kubenswrapper[8244]: I0318 09:54:44.827112 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmffc\" (UniqueName: \"kubernetes.io/projected/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-kube-api-access-gmffc\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.827280 master-0 kubenswrapper[8244]: I0318 09:54:44.827171 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/62b82d72-d73c-451a-84e1-551d73036aa8-host-slash\") pod \"iptables-alerter-r7h65\" (UID: \"62b82d72-d73c-451a-84e1-551d73036aa8\") " pod="openshift-network-operator/iptables-alerter-r7h65" Mar 18 09:54:44.827280 master-0 kubenswrapper[8244]: I0318 09:54:44.827220 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghd2r\" (UniqueName: \"kubernetes.io/projected/9ccdc221-4ec5-487e-8ec4-85284ed628d8-kube-api-access-ghd2r\") pod \"network-operator-7bd846bfc4-8srnz\" (UID: \"9ccdc221-4ec5-487e-8ec4-85284ed628d8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-8srnz" Mar 18 09:54:44.827280 master-0 kubenswrapper[8244]: I0318 09:54:44.827178 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/accc57fb-75f5-4f89-9804-6ede7f77e27c-trusted-ca\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" Mar 18 09:54:44.827280 master-0 kubenswrapper[8244]: I0318 09:54:44.827274 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/f69a00b6-d908-4485-bb0d-57594fc01d24-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-8kx9m\" (UID: \"f69a00b6-d908-4485-bb0d-57594fc01d24\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m" Mar 18 09:54:44.827280 master-0 kubenswrapper[8244]: I0318 09:54:44.827276 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d72e695-0183-4ee8-8add-5425e67f7138-config\") pod \"openshift-apiserver-operator-d65958b8-zz68c\" (UID: \"0d72e695-0183-4ee8-8add-5425e67f7138\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c" Mar 18 09:54:44.828199 master-0 kubenswrapper[8244]: I0318 
09:54:44.827367 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-os-release\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.828199 master-0 kubenswrapper[8244]: I0318 09:54:44.827468 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-conf-dir\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.828199 master-0 kubenswrapper[8244]: I0318 09:54:44.827493 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shbrj\" (UniqueName: \"kubernetes.io/projected/6f266bad-8b30-4300-ad93-9d48e61f2440-kube-api-access-shbrj\") pod \"marketplace-operator-89ccd998f-2glpv\" (UID: \"6f266bad-8b30-4300-ad93-9d48e61f2440\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" Mar 18 09:54:44.828199 master-0 kubenswrapper[8244]: I0318 09:54:44.827537 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec53d7fa-445b-4e1d-84ef-545f08e80ccc-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-lk698\" (UID: \"ec53d7fa-445b-4e1d-84ef-545f08e80ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698" Mar 18 09:54:44.828199 master-0 kubenswrapper[8244]: I0318 09:54:44.827555 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-cnibin\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: 
\"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 09:54:44.828199 master-0 kubenswrapper[8244]: I0318 09:54:44.827589 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-ovnkube-script-lib\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.828199 master-0 kubenswrapper[8244]: I0318 09:54:44.827609 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-g25jq\" (UID: \"3646e0cd-49c9-4a98-a2e3-efe9359cc6c4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq" Mar 18 09:54:44.828199 master-0 kubenswrapper[8244]: I0318 09:54:44.827629 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-run-k8s-cni-cncf-io\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.828199 master-0 kubenswrapper[8244]: I0318 09:54:44.827646 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m" Mar 18 09:54:44.828199 master-0 kubenswrapper[8244]: I0318 09:54:44.827664 8244 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 18 09:54:44.828199 master-0 kubenswrapper[8244]: I0318 09:54:44.827691 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6-config\") pod \"openshift-kube-scheduler-operator-dddff6458-vj8tt\" (UID: \"3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt" Mar 18 09:54:44.828199 master-0 kubenswrapper[8244]: I0318 09:54:44.827710 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxv6v\" (UniqueName: \"kubernetes.io/projected/a078565a-6970-4f42-84f4-938f1d637245-kube-api-access-cxv6v\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" Mar 18 09:54:44.828199 master-0 kubenswrapper[8244]: I0318 09:54:44.827725 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhzg4\" (UniqueName: \"kubernetes.io/projected/d26036f1-bdce-4ec5-873f-962fa7e8e6c1-kube-api-access-lhzg4\") pod \"cluster-olm-operator-67dcd4998-mrc8q\" (UID: \"d26036f1-bdce-4ec5-873f-962fa7e8e6c1\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q" Mar 18 09:54:44.828199 master-0 kubenswrapper[8244]: I0318 09:54:44.827742 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-run-systemd\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.828199 master-0 kubenswrapper[8244]: I0318 09:54:44.827758 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-cnibin\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.828199 master-0 kubenswrapper[8244]: I0318 09:54:44.827775 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0999f781-3299-4cb6-ba76-2a4f4584c685-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-pzqqc\" (UID: \"0999f781-3299-4cb6-ba76-2a4f4584c685\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc" Mar 18 09:54:44.828199 master-0 kubenswrapper[8244]: I0318 09:54:44.827979 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec53d7fa-445b-4e1d-84ef-545f08e80ccc-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-lk698\" (UID: \"ec53d7fa-445b-4e1d-84ef-545f08e80ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698" Mar 18 09:54:44.828199 master-0 kubenswrapper[8244]: I0318 09:54:44.828060 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-g25jq\" (UID: \"3646e0cd-49c9-4a98-a2e3-efe9359cc6c4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq" Mar 18 09:54:44.828199 master-0 kubenswrapper[8244]: I0318 09:54:44.828105 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a078565a-6970-4f42-84f4-938f1d637245-config\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " 
pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" Mar 18 09:54:44.828199 master-0 kubenswrapper[8244]: I0318 09:54:44.828146 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6-config\") pod \"openshift-kube-scheduler-operator-dddff6458-vj8tt\" (UID: \"3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt" Mar 18 09:54:44.828199 master-0 kubenswrapper[8244]: I0318 09:54:44.828146 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-cni-bin\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.828214 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0999f781-3299-4cb6-ba76-2a4f4584c685-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-pzqqc\" (UID: \"0999f781-3299-4cb6-ba76-2a4f4584c685\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.828260 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/f69a00b6-d908-4485-bb0d-57594fc01d24-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-8kx9m\" (UID: \"f69a00b6-d908-4485-bb0d-57594fc01d24\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.828284 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wj9sq\" (UniqueName: \"kubernetes.io/projected/ec53d7fa-445b-4e1d-84ef-545f08e80ccc-kube-api-access-wj9sq\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-lk698\" (UID: \"ec53d7fa-445b-4e1d-84ef-545f08e80ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.828287 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a078565a-6970-4f42-84f4-938f1d637245-config\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.828419 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/15f8941b-dba2-40ba-86d5-3318f5b635cc-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.828434 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0999f781-3299-4cb6-ba76-2a4f4584c685-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-pzqqc\" (UID: \"0999f781-3299-4cb6-ba76-2a4f4584c685\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.828511 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlxfz\" (UniqueName: \"kubernetes.io/projected/bb35841e-d992-4044-aaaa-06c9faf47bd0-kube-api-access-zlxfz\") pod \"service-ca-operator-b865698dc-pgtbr\" (UID: 
\"bb35841e-d992-4044-aaaa-06c9faf47bd0\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.828537 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.828579 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2635254-a491-42e5-b598-461c24bf77ca-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.828705 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-node-log\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.828747 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480-serving-cert\") pod \"openshift-config-operator-95bf4f4d-495pg\" (UID: \"0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.828776 8244 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-495pg\" (UID: \"0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.828847 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-8kx9m\" (UID: \"f69a00b6-d908-4485-bb0d-57594fc01d24\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.828871 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-hostroot\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.828890 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-run-multus-certs\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.828916 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics\") pod 
\"marketplace-operator-89ccd998f-2glpv\" (UID: \"6f266bad-8b30-4300-ad93-9d48e61f2440\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.828937 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d72e695-0183-4ee8-8add-5425e67f7138-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-zz68c\" (UID: \"0d72e695-0183-4ee8-8add-5425e67f7138\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.828956 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/91331360-dc70-45bb-a815-e00664bae6c4-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.828973 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-kubelet\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.828990 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-systemd-units\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.829010 8244 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-f25pg\" (UniqueName: \"kubernetes.io/projected/f076eaf0-b041-4db0-ba06-3d85e23bb654-kube-api-access-f25pg\") pod \"authentication-operator-5885bfd7f4-4q9tr\" (UID: \"f076eaf0-b041-4db0-ba06-3d85e23bb654\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.829027 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480-serving-cert\") pod \"openshift-config-operator-95bf4f4d-495pg\" (UID: \"0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.829062 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x6ht\" (UniqueName: \"kubernetes.io/projected/0442ec6c-5973-40a5-a0c3-dc02de46d343-kube-api-access-5x6ht\") pod \"network-metrics-daemon-tbxt4\" (UID: \"0442ec6c-5973-40a5-a0c3-dc02de46d343\") " pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.829085 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/62b82d72-d73c-451a-84e1-551d73036aa8-iptables-alerter-script\") pod \"iptables-alerter-r7h65\" (UID: \"62b82d72-d73c-451a-84e1-551d73036aa8\") " pod="openshift-network-operator/iptables-alerter-r7h65" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.829090 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-495pg\" (UID: 
\"0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.829108 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxj5c\" (UniqueName: \"kubernetes.io/projected/d0605021-862d-424a-a4c1-037fb005b77e-kube-api-access-cxj5c\") pod \"ovnkube-control-plane-57f769d897-8txtx\" (UID: \"d0605021-862d-424a-a4c1-037fb005b77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.829146 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s54f9\" (UniqueName: \"kubernetes.io/projected/8e812dd9-cd05-4e9e-8710-d0920181ece2-kube-api-access-s54f9\") pod \"csi-snapshot-controller-operator-5f5d689c6b-mqbmq\" (UID: \"8e812dd9-cd05-4e9e-8710-d0920181ece2\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-mqbmq" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.829267 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls\") pod \"dns-operator-9c5679d8f-jrmkr\" (UID: \"8cb5158f-2199-42c0-995a-8490c9ec8a95\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-jrmkr" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.829291 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a6a616d-012a-479e-ab3d-b21295ea1805-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-smghb\" (UID: \"6a6a616d-012a-479e-ab3d-b21295ea1805\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.829305 8244 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d72e695-0183-4ee8-8add-5425e67f7138-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-zz68c\" (UID: \"0d72e695-0183-4ee8-8add-5425e67f7138\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.829318 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/62b82d72-d73c-451a-84e1-551d73036aa8-iptables-alerter-script\") pod \"iptables-alerter-r7h65\" (UID: \"62b82d72-d73c-451a-84e1-551d73036aa8\") " pod="openshift-network-operator/iptables-alerter-r7h65" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.829324 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs\") pod \"network-metrics-daemon-tbxt4\" (UID: \"0442ec6c-5973-40a5-a0c3-dc02de46d343\") " pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.829312 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/91331360-dc70-45bb-a815-e00664bae6c4-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.829351 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-var-lib-openvswitch\") pod \"ovnkube-node-frnfl\" (UID: 
\"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.829393 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-vj8tt\" (UID: \"3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.829424 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ccdc221-4ec5-487e-8ec4-85284ed628d8-metrics-tls\") pod \"network-operator-7bd846bfc4-8srnz\" (UID: \"9ccdc221-4ec5-487e-8ec4-85284ed628d8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-8srnz" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.829436 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a6a616d-012a-479e-ab3d-b21295ea1805-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-smghb\" (UID: \"6a6a616d-012a-479e-ab3d-b21295ea1805\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.829467 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-config\") pod \"route-controller-manager-966f67d76-nzx5f\" (UID: \"c2d6bbbe-b2fb-4df3-b93f-b3b47532719b\") " pod="openshift-route-controller-manager/route-controller-manager-966f67d76-nzx5f" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.828987 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2635254-a491-42e5-b598-461c24bf77ca-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.829511 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-vj8tt\" (UID: \"3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.829516 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-var-lib-kubelet\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.829545 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-etc-kubernetes\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.829567 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a078565a-6970-4f42-84f4-938f1d637245-serving-cert\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" Mar 18 09:54:44.829950 
master-0 kubenswrapper[8244]: I0318 09:54:44.829591 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a078565a-6970-4f42-84f4-938f1d637245-etcd-ca\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.829686 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a078565a-6970-4f42-84f4-938f1d637245-etcd-ca\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.829699 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a078565a-6970-4f42-84f4-938f1d637245-serving-cert\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.829727 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a078565a-6970-4f42-84f4-938f1d637245-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.829746 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec53d7fa-445b-4e1d-84ef-545f08e80ccc-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-lk698\" (UID: \"ec53d7fa-445b-4e1d-84ef-545f08e80ccc\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698" Mar 18 09:54:44.829950 master-0 kubenswrapper[8244]: I0318 09:54:44.829765 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-run-ovn\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.834614 master-0 kubenswrapper[8244]: I0318 09:54:44.831459 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb35841e-d992-4044-aaaa-06c9faf47bd0-config\") pod \"service-ca-operator-b865698dc-pgtbr\" (UID: \"bb35841e-d992-4044-aaaa-06c9faf47bd0\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr" Mar 18 09:54:44.834614 master-0 kubenswrapper[8244]: I0318 09:54:44.831514 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvnrf\" (UniqueName: \"kubernetes.io/projected/62b82d72-d73c-451a-84e1-551d73036aa8-kube-api-access-lvnrf\") pod \"iptables-alerter-r7h65\" (UID: \"62b82d72-d73c-451a-84e1-551d73036aa8\") " pod="openshift-network-operator/iptables-alerter-r7h65" Mar 18 09:54:44.834614 master-0 kubenswrapper[8244]: I0318 09:54:44.831555 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9ccdc221-4ec5-487e-8ec4-85284ed628d8-host-etc-kube\") pod \"network-operator-7bd846bfc4-8srnz\" (UID: \"9ccdc221-4ec5-487e-8ec4-85284ed628d8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-8srnz" Mar 18 09:54:44.834614 master-0 kubenswrapper[8244]: I0318 09:54:44.831624 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-cni-binary-copy\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.834614 master-0 kubenswrapper[8244]: I0318 09:54:44.831739 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ccdc221-4ec5-487e-8ec4-85284ed628d8-metrics-tls\") pod \"network-operator-7bd846bfc4-8srnz\" (UID: \"9ccdc221-4ec5-487e-8ec4-85284ed628d8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-8srnz" Mar 18 09:54:44.834614 master-0 kubenswrapper[8244]: I0318 09:54:44.831775 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a6a616d-012a-479e-ab3d-b21295ea1805-config\") pod \"kube-apiserver-operator-8b68b9d9b-smghb\" (UID: \"6a6a616d-012a-479e-ab3d-b21295ea1805\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb" Mar 18 09:54:44.834614 master-0 kubenswrapper[8244]: I0318 09:54:44.831985 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d0605021-862d-424a-a4c1-037fb005b77e-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-8txtx\" (UID: \"d0605021-862d-424a-a4c1-037fb005b77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx" Mar 18 09:54:44.834614 master-0 kubenswrapper[8244]: I0318 09:54:44.832004 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec53d7fa-445b-4e1d-84ef-545f08e80ccc-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-lk698\" (UID: \"ec53d7fa-445b-4e1d-84ef-545f08e80ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698" Mar 18 09:54:44.834614 master-0 kubenswrapper[8244]: I0318 09:54:44.832062 
8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/d26036f1-bdce-4ec5-873f-962fa7e8e6c1-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-mrc8q\" (UID: \"d26036f1-bdce-4ec5-873f-962fa7e8e6c1\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q" Mar 18 09:54:44.834614 master-0 kubenswrapper[8244]: I0318 09:54:44.832175 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb35841e-d992-4044-aaaa-06c9faf47bd0-config\") pod \"service-ca-operator-b865698dc-pgtbr\" (UID: \"bb35841e-d992-4044-aaaa-06c9faf47bd0\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr" Mar 18 09:54:44.834614 master-0 kubenswrapper[8244]: I0318 09:54:44.832198 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a078565a-6970-4f42-84f4-938f1d637245-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" Mar 18 09:54:44.834614 master-0 kubenswrapper[8244]: I0318 09:54:44.832218 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/025ade16-8502-4b71-a4be-f13dee081e3a-kube-api-access-8l4b6" (OuterVolumeSpecName: "kube-api-access-8l4b6") pod "025ade16-8502-4b71-a4be-f13dee081e3a" (UID: "025ade16-8502-4b71-a4be-f13dee081e3a"). InnerVolumeSpecName "kube-api-access-8l4b6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:54:44.834614 master-0 kubenswrapper[8244]: I0318 09:54:44.832561 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-2glpv\" (UID: \"6f266bad-8b30-4300-ad93-9d48e61f2440\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" Mar 18 09:54:44.837254 master-0 kubenswrapper[8244]: I0318 09:54:44.836658 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-ovn-node-metrics-cert\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.849324 master-0 kubenswrapper[8244]: I0318 09:54:44.849212 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 18 09:54:44.850806 master-0 kubenswrapper[8244]: I0318 09:54:44.850743 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-zvdvg" Mar 18 09:54:44.859664 master-0 kubenswrapper[8244]: I0318 09:54:44.859619 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-ovnkube-script-lib\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.868395 master-0 kubenswrapper[8244]: I0318 09:54:44.868343 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 09:54:44.888014 master-0 kubenswrapper[8244]: I0318 09:54:44.887981 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 18 09:54:44.908156 master-0 kubenswrapper[8244]: I0318 09:54:44.908116 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 18 09:54:44.929009 master-0 kubenswrapper[8244]: I0318 09:54:44.928637 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 18 09:54:44.932772 master-0 kubenswrapper[8244]: I0318 09:54:44.932739 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b46jq\" (UniqueName: \"kubernetes.io/projected/5ea90fee-5b5e-4b59-bfc4-969ee8c7912e-kube-api-access-b46jq\") pod \"service-ca-79bc6b8d76-jjcsv\" (UID: \"5ea90fee-5b5e-4b59-bfc4-969ee8c7912e\") " pod="openshift-service-ca/service-ca-79bc6b8d76-jjcsv" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.932786 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-cni-netd\") pod \"ovnkube-node-frnfl\" 
(UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.932872 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-hkzr2\" (UID: \"ca4a0040-a638-46fa-a1cb-a19d83a7ebe4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.932891 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-etc-openvswitch\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.932910 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-socket-dir-parent\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.932936 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-r8fkv\" (UID: \"d4d2218c-f9df-4d43-8727-ed3a920e23f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.933437 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-run-openvswitch\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.933486 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-run-ovn-kubernetes\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.933543 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.933613 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.933651 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-var-lib-cni-multus\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 
09:54:44.933674 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/5ea90fee-5b5e-4b59-bfc4-969ee8c7912e-signing-key\") pod \"service-ca-79bc6b8d76-jjcsv\" (UID: \"5ea90fee-5b5e-4b59-bfc4-969ee8c7912e\") " pod="openshift-service-ca/service-ca-79bc6b8d76-jjcsv" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.933701 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-system-cni-dir\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.933733 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.933765 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert\") pod \"catalog-operator-68f85b4d6c-fhz5s\" (UID: \"ee376320-9ca0-444d-ab37-9cbcb6729b11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.933809 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert\") pod \"olm-operator-5c9796789-hc74k\" (UID: \"db52ca42-e458-407f-9eeb-bf6de6405edc\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k" Mar 18 
09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.933887 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-os-release\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.933918 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-run-netns\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.933940 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-log-socket\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.933964 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-client-ca\") pod \"route-controller-manager-966f67d76-nzx5f\" (UID: \"c2d6bbbe-b2fb-4df3-b93f-b3b47532719b\") " pod="openshift-route-controller-manager/route-controller-manager-966f67d76-nzx5f" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.933990 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-run-netns\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " 
pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934030 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-cni-dir\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934070 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/62b82d72-d73c-451a-84e1-551d73036aa8-host-slash\") pod \"iptables-alerter-r7h65\" (UID: \"62b82d72-d73c-451a-84e1-551d73036aa8\") " pod="openshift-network-operator/iptables-alerter-r7h65" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934108 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-cnibin\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934149 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-os-release\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934177 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-conf-dir\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.935042 master-0 
kubenswrapper[8244]: I0318 09:54:44.934219 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-run-systemd\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934244 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-run-k8s-cni-cncf-io\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934270 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934308 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-cnibin\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934334 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-cni-bin\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 
09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934359 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/15f8941b-dba2-40ba-86d5-3318f5b635cc-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934394 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934434 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-node-log\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934461 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-8kx9m\" (UID: \"f69a00b6-d908-4485-bb0d-57594fc01d24\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934485 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-hostroot\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934508 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-run-multus-certs\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934528 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-2glpv\" (UID: \"6f266bad-8b30-4300-ad93-9d48e61f2440\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934548 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-systemd-units\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934567 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-kubelet\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934608 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs\") pod \"network-metrics-daemon-tbxt4\" (UID: \"0442ec6c-5973-40a5-a0c3-dc02de46d343\") " pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934630 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-var-lib-openvswitch\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934649 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls\") pod \"dns-operator-9c5679d8f-jrmkr\" (UID: \"8cb5158f-2199-42c0-995a-8490c9ec8a95\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-jrmkr" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934669 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-run-ovn\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934685 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-config\") pod \"route-controller-manager-966f67d76-nzx5f\" (UID: \"c2d6bbbe-b2fb-4df3-b93f-b3b47532719b\") " pod="openshift-route-controller-manager/route-controller-manager-966f67d76-nzx5f" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934703 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-var-lib-kubelet\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934718 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-etc-kubernetes\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934739 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9ccdc221-4ec5-487e-8ec4-85284ed628d8-host-etc-kube\") pod \"network-operator-7bd846bfc4-8srnz\" (UID: \"9ccdc221-4ec5-487e-8ec4-85284ed628d8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-8srnz" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934755 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-slash\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934772 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-var-lib-cni-bin\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934788 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/5ea90fee-5b5e-4b59-bfc4-969ee8c7912e-signing-cabundle\") pod \"service-ca-79bc6b8d76-jjcsv\" (UID: \"5ea90fee-5b5e-4b59-bfc4-969ee8c7912e\") " pod="openshift-service-ca/service-ca-79bc6b8d76-jjcsv" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934809 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-serving-cert\") pod \"route-controller-manager-966f67d76-nzx5f\" (UID: \"c2d6bbbe-b2fb-4df3-b93f-b3b47532719b\") " pod="openshift-route-controller-manager/route-controller-manager-966f67d76-nzx5f" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934844 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fn45\" (UniqueName: \"kubernetes.io/projected/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-kube-api-access-4fn45\") pod \"route-controller-manager-966f67d76-nzx5f\" (UID: \"c2d6bbbe-b2fb-4df3-b93f-b3b47532719b\") " pod="openshift-route-controller-manager/route-controller-manager-966f67d76-nzx5f" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934865 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/15f8941b-dba2-40ba-86d5-3318f5b635cc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934890 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-system-cni-dir\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " 
pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934911 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934935 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: I0318 09:54:44.934983 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8l4b6\" (UniqueName: \"kubernetes.io/projected/025ade16-8502-4b71-a4be-f13dee081e3a-kube-api-access-8l4b6\") on node \"master-0\" DevicePath \"\"" Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: E0318 09:54:44.933109 8244 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 09:54:44.935042 master-0 kubenswrapper[8244]: E0318 09:54:44.935117 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs podName:ca4a0040-a638-46fa-a1cb-a19d83a7ebe4 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:45.435095978 +0000 UTC m=+1.914832106 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-hkzr2" (UID: "ca4a0040-a638-46fa-a1cb-a19d83a7ebe4") : secret "multus-admission-controller-secret" not found Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: I0318 09:54:44.935347 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-run-openvswitch\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: I0318 09:54:44.933241 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-cni-netd\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: I0318 09:54:44.935396 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-run-ovn-kubernetes\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: I0318 09:54:44.933266 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-etc-openvswitch\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: E0318 09:54:44.935449 8244 secret.go:189] Couldn't get secret 
openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: E0318 09:54:44.935471 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert podName:c2635254-a491-42e5-b598-461c24bf77ca nodeName:}" failed. No retries permitted until 2026-03-18 09:54:45.435463557 +0000 UTC m=+1.915199685 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-s7rm6" (UID: "c2635254-a491-42e5-b598-461c24bf77ca") : secret "performance-addon-operator-webhook-cert" not found Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: I0318 09:54:44.933325 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-socket-dir-parent\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: E0318 09:54:44.935515 8244 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: E0318 09:54:44.935534 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert podName:15f8941b-dba2-40ba-86d5-3318f5b635cc nodeName:}" failed. No retries permitted until 2026-03-18 09:54:45.435527678 +0000 UTC m=+1.915263806 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert") pod "cluster-version-operator-56d8475767-c2qzr" (UID: "15f8941b-dba2-40ba-86d5-3318f5b635cc") : secret "cluster-version-operator-serving-cert" not found Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: I0318 09:54:44.935554 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-var-lib-cni-multus\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: I0318 09:54:44.935637 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-system-cni-dir\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: I0318 09:54:44.935674 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: E0318 09:54:44.935718 8244 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: E0318 09:54:44.935740 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert podName:ee376320-9ca0-444d-ab37-9cbcb6729b11 nodeName:}" failed. 
No retries permitted until 2026-03-18 09:54:45.435734193 +0000 UTC m=+1.915470311 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert") pod "catalog-operator-68f85b4d6c-fhz5s" (UID: "ee376320-9ca0-444d-ab37-9cbcb6729b11") : secret "catalog-operator-serving-cert" not found Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: E0318 09:54:44.935778 8244 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: E0318 09:54:44.935796 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert podName:db52ca42-e458-407f-9eeb-bf6de6405edc nodeName:}" failed. No retries permitted until 2026-03-18 09:54:45.435789505 +0000 UTC m=+1.915525633 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert") pod "olm-operator-5c9796789-hc74k" (UID: "db52ca42-e458-407f-9eeb-bf6de6405edc") : secret "olm-operator-serving-cert" not found Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: I0318 09:54:44.935875 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-os-release\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: I0318 09:54:44.935910 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-run-netns\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: I0318 09:54:44.935937 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-log-socket\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: E0318 09:54:44.935982 8244 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: E0318 09:54:44.936006 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-client-ca podName:c2d6bbbe-b2fb-4df3-b93f-b3b47532719b nodeName:}" failed. No retries permitted until 2026-03-18 09:54:45.43600018 +0000 UTC m=+1.915736308 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-client-ca") pod "route-controller-manager-966f67d76-nzx5f" (UID: "c2d6bbbe-b2fb-4df3-b93f-b3b47532719b") : configmap "client-ca" not found Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: I0318 09:54:44.936026 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-run-netns\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: I0318 09:54:44.936081 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-cni-dir\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: I0318 09:54:44.936107 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/62b82d72-d73c-451a-84e1-551d73036aa8-host-slash\") pod \"iptables-alerter-r7h65\" (UID: \"62b82d72-d73c-451a-84e1-551d73036aa8\") " pod="openshift-network-operator/iptables-alerter-r7h65" Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: I0318 09:54:44.936130 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-cnibin\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: I0318 09:54:44.936166 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-os-release\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: I0318 09:54:44.936191 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-conf-dir\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: I0318 09:54:44.936214 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-run-systemd\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: I0318 09:54:44.936242 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-run-k8s-cni-cncf-io\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: E0318 09:54:44.936286 8244 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: E0318 09:54:44.936312 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls podName:8ee99294-4785-49d0-b493-0d734cf09396 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:45.436299627 +0000 UTC m=+1.916035755 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-f4f7m" (UID: "8ee99294-4785-49d0-b493-0d734cf09396") : secret "image-registry-operator-tls" not found Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: I0318 09:54:44.936349 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-cnibin\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: I0318 09:54:44.936384 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-cni-bin\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: I0318 09:54:44.936420 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/15f8941b-dba2-40ba-86d5-3318f5b635cc-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr" Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: E0318 09:54:44.936465 8244 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: E0318 09:54:44.936487 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls podName:c2635254-a491-42e5-b598-461c24bf77ca nodeName:}" 
failed. No retries permitted until 2026-03-18 09:54:45.436481041 +0000 UTC m=+1.916217169 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-s7rm6" (UID: "c2635254-a491-42e5-b598-461c24bf77ca") : secret "node-tuning-operator-tls" not found Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: I0318 09:54:44.936509 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-node-log\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: E0318 09:54:44.936547 8244 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: E0318 09:54:44.936564 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls podName:f69a00b6-d908-4485-bb0d-57594fc01d24 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:45.436558873 +0000 UTC m=+1.916295001 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-8kx9m" (UID: "f69a00b6-d908-4485-bb0d-57594fc01d24") : secret "cluster-monitoring-operator-tls" not found Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: I0318 09:54:44.936583 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-hostroot\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: I0318 09:54:44.936606 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-run-multus-certs\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: E0318 09:54:44.936646 8244 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: E0318 09:54:44.936669 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics podName:6f266bad-8b30-4300-ad93-9d48e61f2440 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:45.436660645 +0000 UTC m=+1.916396773 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-2glpv" (UID: "6f266bad-8b30-4300-ad93-9d48e61f2440") : secret "marketplace-operator-metrics" not found
Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: I0318 09:54:44.936719 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-systemd-units\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: I0318 09:54:44.936745 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-kubelet\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: E0318 09:54:44.936785 8244 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: E0318 09:54:44.936806 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs podName:0442ec6c-5973-40a5-a0c3-dc02de46d343 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:45.436797129 +0000 UTC m=+1.916533257 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs") pod "network-metrics-daemon-tbxt4" (UID: "0442ec6c-5973-40a5-a0c3-dc02de46d343") : secret "metrics-daemon-secret" not found
Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: I0318 09:54:44.936843 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-var-lib-openvswitch\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: E0318 09:54:44.936882 8244 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: E0318 09:54:44.936900 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls podName:8cb5158f-2199-42c0-995a-8490c9ec8a95 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:45.436894351 +0000 UTC m=+1.916630479 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls") pod "dns-operator-9c5679d8f-jrmkr" (UID: "8cb5158f-2199-42c0-995a-8490c9ec8a95") : secret "metrics-tls" not found
Mar 18 09:54:44.937682 master-0 kubenswrapper[8244]: I0318 09:54:44.936921 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-run-ovn\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:54:44.939658 master-0 kubenswrapper[8244]: I0318 09:54:44.938306 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-system-cni-dir\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw"
Mar 18 09:54:44.939658 master-0 kubenswrapper[8244]: E0318 09:54:44.938382 8244 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 18 09:54:44.939658 master-0 kubenswrapper[8244]: E0318 09:54:44.938410 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-serving-cert podName:c2d6bbbe-b2fb-4df3-b93f-b3b47532719b nodeName:}" failed. No retries permitted until 2026-03-18 09:54:45.438399097 +0000 UTC m=+1.918135225 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-serving-cert") pod "route-controller-manager-966f67d76-nzx5f" (UID: "c2d6bbbe-b2fb-4df3-b93f-b3b47532719b") : secret "serving-cert" not found
Mar 18 09:54:44.939658 master-0 kubenswrapper[8244]: E0318 09:54:44.933387 8244 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 18 09:54:44.939658 master-0 kubenswrapper[8244]: I0318 09:54:44.938495 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/15f8941b-dba2-40ba-86d5-3318f5b635cc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr"
Mar 18 09:54:44.939658 master-0 kubenswrapper[8244]: I0318 09:54:44.938497 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-slash\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:54:44.939658 master-0 kubenswrapper[8244]: I0318 09:54:44.938527 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 09:54:44.939658 master-0 kubenswrapper[8244]: E0318 09:54:44.938575 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert podName:d4d2218c-f9df-4d43-8727-ed3a920e23f7 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:45.438533741 +0000 UTC m=+1.918270029 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-r8fkv" (UID: "d4d2218c-f9df-4d43-8727-ed3a920e23f7") : secret "package-server-manager-serving-cert" not found
Mar 18 09:54:44.939658 master-0 kubenswrapper[8244]: E0318 09:54:44.935043 8244 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 18 09:54:44.939658 master-0 kubenswrapper[8244]: E0318 09:54:44.938635 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls podName:accc57fb-75f5-4f89-9804-6ede7f77e27c nodeName:}" failed. No retries permitted until 2026-03-18 09:54:45.438623653 +0000 UTC m=+1.918359871 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls") pod "ingress-operator-66b84d69b-kr5kz" (UID: "accc57fb-75f5-4f89-9804-6ede7f77e27c") : secret "metrics-tls" not found
Mar 18 09:54:44.939658 master-0 kubenswrapper[8244]: I0318 09:54:44.938616 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-var-lib-cni-bin\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:54:44.939658 master-0 kubenswrapper[8244]: I0318 09:54:44.938679 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-etc-kubernetes\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:54:44.939658 master-0 kubenswrapper[8244]: I0318 09:54:44.938764 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9ccdc221-4ec5-487e-8ec4-85284ed628d8-host-etc-kube\") pod \"network-operator-7bd846bfc4-8srnz\" (UID: \"9ccdc221-4ec5-487e-8ec4-85284ed628d8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-8srnz"
Mar 18 09:54:44.939658 master-0 kubenswrapper[8244]: I0318 09:54:44.939179 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-config\") pod \"route-controller-manager-966f67d76-nzx5f\" (UID: \"c2d6bbbe-b2fb-4df3-b93f-b3b47532719b\") " pod="openshift-route-controller-manager/route-controller-manager-966f67d76-nzx5f"
Mar 18 09:54:44.939658 master-0 kubenswrapper[8244]: I0318 09:54:44.939266 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-var-lib-kubelet\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 09:54:44.949179 master-0 kubenswrapper[8244]: I0318 09:54:44.949145 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Mar 18 09:54:44.959029 master-0 kubenswrapper[8244]: I0318 09:54:44.958758 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/5ea90fee-5b5e-4b59-bfc4-969ee8c7912e-signing-cabundle\") pod \"service-ca-79bc6b8d76-jjcsv\" (UID: \"5ea90fee-5b5e-4b59-bfc4-969ee8c7912e\") " pod="openshift-service-ca/service-ca-79bc6b8d76-jjcsv"
Mar 18 09:54:44.968529 master-0 kubenswrapper[8244]: I0318 09:54:44.968502 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Mar 18 09:54:44.988138 master-0 kubenswrapper[8244]: I0318 09:54:44.988030 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Mar 18 09:54:44.996577 master-0 kubenswrapper[8244]: I0318 09:54:44.996330 8244 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Mar 18 09:54:44.999596 master-0 kubenswrapper[8244]: I0318 09:54:44.999536 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/5ea90fee-5b5e-4b59-bfc4-969ee8c7912e-signing-key\") pod \"service-ca-79bc6b8d76-jjcsv\" (UID: \"5ea90fee-5b5e-4b59-bfc4-969ee8c7912e\") " pod="openshift-service-ca/service-ca-79bc6b8d76-jjcsv"
Mar 18 09:54:45.013365 master-0 kubenswrapper[8244]: I0318 09:54:45.013317 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Mar 18 09:54:45.027812 master-0 kubenswrapper[8244]: I0318 09:54:45.027758 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 18 09:54:45.081626 master-0 kubenswrapper[8244]: I0318 09:54:45.081582 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15f8941b-dba2-40ba-86d5-3318f5b635cc-kube-api-access\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr"
Mar 18 09:54:45.101428 master-0 kubenswrapper[8244]: I0318 09:54:45.101380 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rzsk\" (UniqueName: \"kubernetes.io/projected/74795f5d-dcd7-4723-8931-c34b59ce3087-kube-api-access-8rzsk\") pod \"network-check-target-42l55\" (UID: \"74795f5d-dcd7-4723-8931-c34b59ce3087\") " pod="openshift-network-diagnostics/network-check-target-42l55"
Mar 18 09:54:45.121750 master-0 kubenswrapper[8244]: I0318 09:54:45.120067 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fjk8\" (UniqueName: \"kubernetes.io/projected/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480-kube-api-access-9fjk8\") pod \"openshift-config-operator-95bf4f4d-495pg\" (UID: \"0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg"
Mar 18 09:54:45.139742 master-0 kubenswrapper[8244]: I0318 09:54:45.139688 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2chb\" (UniqueName: \"kubernetes.io/projected/8cb5158f-2199-42c0-995a-8490c9ec8a95-kube-api-access-p2chb\") pod \"dns-operator-9c5679d8f-jrmkr\" (UID: \"8cb5158f-2199-42c0-995a-8490c9ec8a95\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-jrmkr"
Mar 18 09:54:45.158334 master-0 kubenswrapper[8244]: I0318 09:54:45.158283 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6a6a616d-012a-479e-ab3d-b21295ea1805-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-smghb\" (UID: \"6a6a616d-012a-479e-ab3d-b21295ea1805\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb"
Mar 18 09:54:45.179776 master-0 kubenswrapper[8244]: I0318 09:54:45.179612 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv8x5\" (UniqueName: \"kubernetes.io/projected/932a70df-3afe-4873-9449-ab6e061d3fe3-kube-api-access-fv8x5\") pod \"csi-snapshot-controller-64854d9cff-2l6cq\" (UID: \"932a70df-3afe-4873-9449-ab6e061d3fe3\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-2l6cq"
Mar 18 09:54:45.201919 master-0 kubenswrapper[8244]: I0318 09:54:45.201867 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8ee99294-4785-49d0-b493-0d734cf09396-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m"
Mar 18 09:54:45.220432 master-0 kubenswrapper[8244]: I0318 09:54:45.220367 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/accc57fb-75f5-4f89-9804-6ede7f77e27c-bound-sa-token\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz"
Mar 18 09:54:45.243108 master-0 kubenswrapper[8244]: I0318 09:54:45.243033 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5dk8\" (UniqueName: \"kubernetes.io/projected/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4-kube-api-access-p5dk8\") pod \"openshift-controller-manager-operator-8c94f4649-g25jq\" (UID: \"3646e0cd-49c9-4a98-a2e3-efe9359cc6c4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq"
Mar 18 09:54:45.261903 master-0 kubenswrapper[8244]: I0318 09:54:45.261815 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx9p2\" (UniqueName: \"kubernetes.io/projected/db52ca42-e458-407f-9eeb-bf6de6405edc-kube-api-access-jx9p2\") pod \"olm-operator-5c9796789-hc74k\" (UID: \"db52ca42-e458-407f-9eeb-bf6de6405edc\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k"
Mar 18 09:54:45.282807 master-0 kubenswrapper[8244]: I0318 09:54:45.282747 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4qp9\" (UniqueName: \"kubernetes.io/projected/d4d2218c-f9df-4d43-8727-ed3a920e23f7-kube-api-access-w4qp9\") pod \"package-server-manager-7b95f86987-r8fkv\" (UID: \"d4d2218c-f9df-4d43-8727-ed3a920e23f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv"
Mar 18 09:54:45.298423 master-0 kubenswrapper[8244]: I0318 09:54:45.298388 8244 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 18 09:54:45.312902 master-0 kubenswrapper[8244]: I0318 09:54:45.312794 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwfph\" (UniqueName: \"kubernetes.io/projected/accc57fb-75f5-4f89-9804-6ede7f77e27c-kube-api-access-nwfph\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz"
Mar 18 09:54:45.340784 master-0 kubenswrapper[8244]: I0318 09:54:45.339240 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-vj8tt\" (UID: \"3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt"
Mar 18 09:54:45.345125 master-0 kubenswrapper[8244]: I0318 09:54:45.345059 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r7qd\" (UniqueName: \"kubernetes.io/projected/f69a00b6-d908-4485-bb0d-57594fc01d24-kube-api-access-5r7qd\") pod \"cluster-monitoring-operator-58845fbb57-8kx9m\" (UID: \"f69a00b6-d908-4485-bb0d-57594fc01d24\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m"
Mar 18 09:54:45.379316 master-0 kubenswrapper[8244]: I0318 09:54:45.379266 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25k9g\" (UniqueName: \"kubernetes.io/projected/ee376320-9ca0-444d-ab37-9cbcb6729b11-kube-api-access-25k9g\") pod \"catalog-operator-68f85b4d6c-fhz5s\" (UID: \"ee376320-9ca0-444d-ab37-9cbcb6729b11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s"
Mar 18 09:54:45.394571 master-0 kubenswrapper[8244]: I0318 09:54:45.394521 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hww8g\" (UniqueName: \"kubernetes.io/projected/8126b78e-d1e4-4de7-a71d-ebc9fa0afdae-kube-api-access-hww8g\") pod \"migrator-8487694857-8tqwj\" (UID: \"8126b78e-d1e4-4de7-a71d-ebc9fa0afdae\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-8tqwj"
Mar 18 09:54:45.424595 master-0 kubenswrapper[8244]: I0318 09:54:45.424543 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkzq9\" (UniqueName: \"kubernetes.io/projected/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-kube-api-access-dkzq9\") pod \"multus-admission-controller-5dbbb8b86f-hkzr2\" (UID: \"ca4a0040-a638-46fa-a1cb-a19d83a7ebe4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2"
Mar 18 09:54:45.441599 master-0 kubenswrapper[8244]: I0318 09:54:45.441491 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz"
Mar 18 09:54:45.441599 master-0 kubenswrapper[8244]: I0318 09:54:45.441553 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-hkzr2\" (UID: \"ca4a0040-a638-46fa-a1cb-a19d83a7ebe4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2"
Mar 18 09:54:45.441814 master-0 kubenswrapper[8244]: E0318 09:54:45.441676 8244 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 18 09:54:45.441814 master-0 kubenswrapper[8244]: I0318 09:54:45.441668 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-r8fkv\" (UID: \"d4d2218c-f9df-4d43-8727-ed3a920e23f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv"
Mar 18 09:54:45.441814 master-0 kubenswrapper[8244]: E0318 09:54:45.441738 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs podName:ca4a0040-a638-46fa-a1cb-a19d83a7ebe4 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:46.441719925 +0000 UTC m=+2.921456053 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-hkzr2" (UID: "ca4a0040-a638-46fa-a1cb-a19d83a7ebe4") : secret "multus-admission-controller-secret" not found
Mar 18 09:54:45.441814 master-0 kubenswrapper[8244]: I0318 09:54:45.441754 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6"
Mar 18 09:54:45.441814 master-0 kubenswrapper[8244]: I0318 09:54:45.441781 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr"
Mar 18 09:54:45.441814 master-0 kubenswrapper[8244]: I0318 09:54:45.441816 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert\") pod \"catalog-operator-68f85b4d6c-fhz5s\" (UID: \"ee376320-9ca0-444d-ab37-9cbcb6729b11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s"
Mar 18 09:54:45.442025 master-0 kubenswrapper[8244]: E0318 09:54:45.441880 8244 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 18 09:54:45.442025 master-0 kubenswrapper[8244]: E0318 09:54:45.441902 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert podName:15f8941b-dba2-40ba-86d5-3318f5b635cc nodeName:}" failed. No retries permitted until 2026-03-18 09:54:46.441895309 +0000 UTC m=+2.921631437 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert") pod "cluster-version-operator-56d8475767-c2qzr" (UID: "15f8941b-dba2-40ba-86d5-3318f5b635cc") : secret "cluster-version-operator-serving-cert" not found
Mar 18 09:54:45.442025 master-0 kubenswrapper[8244]: E0318 09:54:45.441923 8244 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 18 09:54:45.442025 master-0 kubenswrapper[8244]: E0318 09:54:45.441978 8244 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 18 09:54:45.442170 master-0 kubenswrapper[8244]: E0318 09:54:45.442089 8244 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 18 09:54:45.442170 master-0 kubenswrapper[8244]: I0318 09:54:45.441988 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert\") pod \"olm-operator-5c9796789-hc74k\" (UID: \"db52ca42-e458-407f-9eeb-bf6de6405edc\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k"
Mar 18 09:54:45.442170 master-0 kubenswrapper[8244]: E0318 09:54:45.442001 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert podName:c2635254-a491-42e5-b598-461c24bf77ca nodeName:}" failed. No retries permitted until 2026-03-18 09:54:46.441984761 +0000 UTC m=+2.921720889 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-s7rm6" (UID: "c2635254-a491-42e5-b598-461c24bf77ca") : secret "performance-addon-operator-webhook-cert" not found
Mar 18 09:54:45.442170 master-0 kubenswrapper[8244]: E0318 09:54:45.441918 8244 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 18 09:54:45.442170 master-0 kubenswrapper[8244]: I0318 09:54:45.442142 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-client-ca\") pod \"route-controller-manager-966f67d76-nzx5f\" (UID: \"c2d6bbbe-b2fb-4df3-b93f-b3b47532719b\") " pod="openshift-route-controller-manager/route-controller-manager-966f67d76-nzx5f"
Mar 18 09:54:45.442170 master-0 kubenswrapper[8244]: E0318 09:54:45.442159 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls podName:accc57fb-75f5-4f89-9804-6ede7f77e27c nodeName:}" failed. No retries permitted until 2026-03-18 09:54:46.442151536 +0000 UTC m=+2.921887654 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls") pod "ingress-operator-66b84d69b-kr5kz" (UID: "accc57fb-75f5-4f89-9804-6ede7f77e27c") : secret "metrics-tls" not found
Mar 18 09:54:45.442170 master-0 kubenswrapper[8244]: E0318 09:54:45.442025 8244 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 18 09:54:45.442397 master-0 kubenswrapper[8244]: E0318 09:54:45.442179 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert podName:d4d2218c-f9df-4d43-8727-ed3a920e23f7 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:46.442170666 +0000 UTC m=+2.921906794 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-r8fkv" (UID: "d4d2218c-f9df-4d43-8727-ed3a920e23f7") : secret "package-server-manager-serving-cert" not found
Mar 18 09:54:45.442397 master-0 kubenswrapper[8244]: E0318 09:54:45.442193 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert podName:db52ca42-e458-407f-9eeb-bf6de6405edc nodeName:}" failed. No retries permitted until 2026-03-18 09:54:46.442184776 +0000 UTC m=+2.921920904 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert") pod "olm-operator-5c9796789-hc74k" (UID: "db52ca42-e458-407f-9eeb-bf6de6405edc") : secret "olm-operator-serving-cert" not found
Mar 18 09:54:45.442397 master-0 kubenswrapper[8244]: E0318 09:54:45.442206 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert podName:ee376320-9ca0-444d-ab37-9cbcb6729b11 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:46.442201077 +0000 UTC m=+2.921937195 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert") pod "catalog-operator-68f85b4d6c-fhz5s" (UID: "ee376320-9ca0-444d-ab37-9cbcb6729b11") : secret "catalog-operator-serving-cert" not found
Mar 18 09:54:45.442397 master-0 kubenswrapper[8244]: E0318 09:54:45.442222 8244 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 18 09:54:45.442397 master-0 kubenswrapper[8244]: E0318 09:54:45.442287 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-client-ca podName:c2d6bbbe-b2fb-4df3-b93f-b3b47532719b nodeName:}" failed. No retries permitted until 2026-03-18 09:54:46.442256378 +0000 UTC m=+2.921992696 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-client-ca") pod "route-controller-manager-966f67d76-nzx5f" (UID: "c2d6bbbe-b2fb-4df3-b93f-b3b47532719b") : configmap "client-ca" not found
Mar 18 09:54:45.442557 master-0 kubenswrapper[8244]: I0318 09:54:45.442451 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m"
Mar 18 09:54:45.442557 master-0 kubenswrapper[8244]: I0318 09:54:45.442531 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6"
Mar 18 09:54:45.442623 master-0 kubenswrapper[8244]: I0318 09:54:45.442581 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-8kx9m\" (UID: \"f69a00b6-d908-4485-bb0d-57594fc01d24\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m"
Mar 18 09:54:45.442623 master-0 kubenswrapper[8244]: I0318 09:54:45.442608 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-2glpv\" (UID: \"6f266bad-8b30-4300-ad93-9d48e61f2440\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv"
Mar 18 09:54:45.442705 master-0 kubenswrapper[8244]: I0318 09:54:45.442676 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs\") pod \"network-metrics-daemon-tbxt4\" (UID: \"0442ec6c-5973-40a5-a0c3-dc02de46d343\") " pod="openshift-multus/network-metrics-daemon-tbxt4"
Mar 18 09:54:45.442753 master-0 kubenswrapper[8244]: I0318 09:54:45.442716 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls\") pod \"dns-operator-9c5679d8f-jrmkr\" (UID: \"8cb5158f-2199-42c0-995a-8490c9ec8a95\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-jrmkr"
Mar 18 09:54:45.442808 master-0 kubenswrapper[8244]: I0318 09:54:45.442781 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-serving-cert\") pod \"route-controller-manager-966f67d76-nzx5f\" (UID: \"c2d6bbbe-b2fb-4df3-b93f-b3b47532719b\") " pod="openshift-route-controller-manager/route-controller-manager-966f67d76-nzx5f"
Mar 18 09:54:45.443012 master-0 kubenswrapper[8244]: E0318 09:54:45.442983 8244 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 18 09:54:45.443054 master-0 kubenswrapper[8244]: E0318 09:54:45.443028 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-serving-cert podName:c2d6bbbe-b2fb-4df3-b93f-b3b47532719b nodeName:}" failed. No retries permitted until 2026-03-18 09:54:46.443018176 +0000 UTC m=+2.922754484 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-serving-cert") pod "route-controller-manager-966f67d76-nzx5f" (UID: "c2d6bbbe-b2fb-4df3-b93f-b3b47532719b") : secret "serving-cert" not found
Mar 18 09:54:45.443111 master-0 kubenswrapper[8244]: E0318 09:54:45.443089 8244 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 18 09:54:45.443153 master-0 kubenswrapper[8244]: E0318 09:54:45.443122 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls podName:8ee99294-4785-49d0-b493-0d734cf09396 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:46.443113859 +0000 UTC m=+2.922849987 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-f4f7m" (UID: "8ee99294-4785-49d0-b493-0d734cf09396") : secret "image-registry-operator-tls" not found
Mar 18 09:54:45.443203 master-0 kubenswrapper[8244]: E0318 09:54:45.443180 8244 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 18 09:54:45.443249 master-0 kubenswrapper[8244]: E0318 09:54:45.443216 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls podName:c2635254-a491-42e5-b598-461c24bf77ca nodeName:}" failed. No retries permitted until 2026-03-18 09:54:46.443207031 +0000 UTC m=+2.922943349 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-s7rm6" (UID: "c2635254-a491-42e5-b598-461c24bf77ca") : secret "node-tuning-operator-tls" not found
Mar 18 09:54:45.443286 master-0 kubenswrapper[8244]: E0318 09:54:45.443267 8244 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 18 09:54:45.443318 master-0 kubenswrapper[8244]: E0318 09:54:45.443292 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls podName:f69a00b6-d908-4485-bb0d-57594fc01d24 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:46.443285193 +0000 UTC m=+2.923021521 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-8kx9m" (UID: "f69a00b6-d908-4485-bb0d-57594fc01d24") : secret "cluster-monitoring-operator-tls" not found
Mar 18 09:54:45.443356 master-0 kubenswrapper[8244]: E0318 09:54:45.443336 8244 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 18 09:54:45.443389 master-0 kubenswrapper[8244]: E0318 09:54:45.443359 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics podName:6f266bad-8b30-4300-ad93-9d48e61f2440 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:46.443352654 +0000 UTC m=+2.923088782 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-2glpv" (UID: "6f266bad-8b30-4300-ad93-9d48e61f2440") : secret "marketplace-operator-metrics" not found
Mar 18 09:54:45.443425 master-0 kubenswrapper[8244]: E0318 09:54:45.443408 8244 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 18 09:54:45.443457 master-0 kubenswrapper[8244]: E0318 09:54:45.443433 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs podName:0442ec6c-5973-40a5-a0c3-dc02de46d343 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:46.443423366 +0000 UTC m=+2.923159694 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs") pod "network-metrics-daemon-tbxt4" (UID: "0442ec6c-5973-40a5-a0c3-dc02de46d343") : secret "metrics-daemon-secret" not found
Mar 18 09:54:45.443510 master-0 kubenswrapper[8244]: E0318 09:54:45.443488 8244 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 18 09:54:45.443549 master-0 kubenswrapper[8244]: E0318 09:54:45.443517 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls podName:8cb5158f-2199-42c0-995a-8490c9ec8a95 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:46.443510598 +0000 UTC m=+2.923246916 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls") pod "dns-operator-9c5679d8f-jrmkr" (UID: "8cb5158f-2199-42c0-995a-8490c9ec8a95") : secret "metrics-tls" not found Mar 18 09:54:45.443756 master-0 kubenswrapper[8244]: I0318 09:54:45.443734 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ktpl\" (UniqueName: \"kubernetes.io/projected/bb942756-bac7-414d-b179-cebdce588a13-kube-api-access-2ktpl\") pod \"network-node-identity-7fl4x\" (UID: \"bb942756-bac7-414d-b179-cebdce588a13\") " pod="openshift-network-node-identity/network-node-identity-7fl4x" Mar 18 09:54:45.453576 master-0 kubenswrapper[8244]: I0318 09:54:45.453515 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb7tz\" (UniqueName: \"kubernetes.io/projected/8ee99294-4785-49d0-b493-0d734cf09396-kube-api-access-tb7tz\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m" Mar 18 09:54:45.465010 master-0 kubenswrapper[8244]: I0318 09:54:45.464935 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcj8f\" (UniqueName: \"kubernetes.io/projected/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-kube-api-access-hcj8f\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 09:54:45.483435 master-0 kubenswrapper[8244]: I0318 09:54:45.483375 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4hfd\" (UniqueName: \"kubernetes.io/projected/c2635254-a491-42e5-b598-461c24bf77ca-kube-api-access-p4hfd\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" 
Mar 18 09:54:45.500938 master-0 kubenswrapper[8244]: I0318 09:54:45.500885 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmffc\" (UniqueName: \"kubernetes.io/projected/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-kube-api-access-gmffc\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:45.528674 master-0 kubenswrapper[8244]: I0318 09:54:45.528616 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghd2r\" (UniqueName: \"kubernetes.io/projected/9ccdc221-4ec5-487e-8ec4-85284ed628d8-kube-api-access-ghd2r\") pod \"network-operator-7bd846bfc4-8srnz\" (UID: \"9ccdc221-4ec5-487e-8ec4-85284ed628d8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-8srnz" Mar 18 09:54:45.543647 master-0 kubenswrapper[8244]: I0318 09:54:45.543589 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w8sl\" (UniqueName: \"kubernetes.io/projected/91331360-dc70-45bb-a815-e00664bae6c4-kube-api-access-8w8sl\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 09:54:45.558023 master-0 kubenswrapper[8244]: I0318 09:54:45.557966 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shbrj\" (UniqueName: \"kubernetes.io/projected/6f266bad-8b30-4300-ad93-9d48e61f2440-kube-api-access-shbrj\") pod \"marketplace-operator-89ccd998f-2glpv\" (UID: \"6f266bad-8b30-4300-ad93-9d48e61f2440\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" Mar 18 09:54:45.582176 master-0 kubenswrapper[8244]: I0318 09:54:45.582106 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0999f781-3299-4cb6-ba76-2a4f4584c685-kube-api-access\") pod 
\"kube-controller-manager-operator-ff989d6cc-pzqqc\" (UID: \"0999f781-3299-4cb6-ba76-2a4f4584c685\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc" Mar 18 09:54:45.604633 master-0 kubenswrapper[8244]: I0318 09:54:45.604585 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxv6v\" (UniqueName: \"kubernetes.io/projected/a078565a-6970-4f42-84f4-938f1d637245-kube-api-access-cxv6v\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" Mar 18 09:54:45.619458 master-0 kubenswrapper[8244]: I0318 09:54:45.619408 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhzg4\" (UniqueName: \"kubernetes.io/projected/d26036f1-bdce-4ec5-873f-962fa7e8e6c1-kube-api-access-lhzg4\") pod \"cluster-olm-operator-67dcd4998-mrc8q\" (UID: \"d26036f1-bdce-4ec5-873f-962fa7e8e6c1\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q" Mar 18 09:54:45.638141 master-0 kubenswrapper[8244]: I0318 09:54:45.638094 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj9sq\" (UniqueName: \"kubernetes.io/projected/ec53d7fa-445b-4e1d-84ef-545f08e80ccc-kube-api-access-wj9sq\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-lk698\" (UID: \"ec53d7fa-445b-4e1d-84ef-545f08e80ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698" Mar 18 09:54:45.659696 master-0 kubenswrapper[8244]: I0318 09:54:45.659639 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlxfz\" (UniqueName: \"kubernetes.io/projected/bb35841e-d992-4044-aaaa-06c9faf47bd0-kube-api-access-zlxfz\") pod \"service-ca-operator-b865698dc-pgtbr\" (UID: \"bb35841e-d992-4044-aaaa-06c9faf47bd0\") " 
pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr" Mar 18 09:54:45.680140 master-0 kubenswrapper[8244]: I0318 09:54:45.680080 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f25pg\" (UniqueName: \"kubernetes.io/projected/f076eaf0-b041-4db0-ba06-3d85e23bb654-kube-api-access-f25pg\") pod \"authentication-operator-5885bfd7f4-4q9tr\" (UID: \"f076eaf0-b041-4db0-ba06-3d85e23bb654\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr" Mar 18 09:54:45.704910 master-0 kubenswrapper[8244]: I0318 09:54:45.704867 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxj5c\" (UniqueName: \"kubernetes.io/projected/d0605021-862d-424a-a4c1-037fb005b77e-kube-api-access-cxj5c\") pod \"ovnkube-control-plane-57f769d897-8txtx\" (UID: \"d0605021-862d-424a-a4c1-037fb005b77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx" Mar 18 09:54:45.726616 master-0 kubenswrapper[8244]: I0318 09:54:45.726560 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x6ht\" (UniqueName: \"kubernetes.io/projected/0442ec6c-5973-40a5-a0c3-dc02de46d343-kube-api-access-5x6ht\") pod \"network-metrics-daemon-tbxt4\" (UID: \"0442ec6c-5973-40a5-a0c3-dc02de46d343\") " pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:54:45.745678 master-0 kubenswrapper[8244]: I0318 09:54:45.745624 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s54f9\" (UniqueName: \"kubernetes.io/projected/8e812dd9-cd05-4e9e-8710-d0920181ece2-kube-api-access-s54f9\") pod \"csi-snapshot-controller-operator-5f5d689c6b-mqbmq\" (UID: \"8e812dd9-cd05-4e9e-8710-d0920181ece2\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-mqbmq" Mar 18 09:54:45.764644 master-0 kubenswrapper[8244]: I0318 09:54:45.764604 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-g6bvr\" (UniqueName: \"kubernetes.io/projected/0d72e695-0183-4ee8-8add-5425e67f7138-kube-api-access-g6bvr\") pod \"openshift-apiserver-operator-d65958b8-zz68c\" (UID: \"0d72e695-0183-4ee8-8add-5425e67f7138\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c" Mar 18 09:54:45.780646 master-0 kubenswrapper[8244]: I0318 09:54:45.780589 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvnrf\" (UniqueName: \"kubernetes.io/projected/62b82d72-d73c-451a-84e1-551d73036aa8-kube-api-access-lvnrf\") pod \"iptables-alerter-r7h65\" (UID: \"62b82d72-d73c-451a-84e1-551d73036aa8\") " pod="openshift-network-operator/iptables-alerter-r7h65" Mar 18 09:54:45.802385 master-0 kubenswrapper[8244]: E0318 09:54:45.801448 8244 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 09:54:45.815203 master-0 kubenswrapper[8244]: E0318 09:54:45.815165 8244 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 09:54:45.837132 master-0 kubenswrapper[8244]: E0318 09:54:45.837091 8244 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 09:54:45.853921 master-0 kubenswrapper[8244]: I0318 09:54:45.853882 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" event={"ID":"0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480","Type":"ContainerStarted","Data":"fe475c93acb3e152a06334aa122f61bc3dfe0a7c617c3c6b5b5bc407433dfd76"} Mar 18 09:54:45.855194 master-0 kubenswrapper[8244]: E0318 09:54:45.855168 8244 kubelet.go:1929] "Failed creating a mirror pod for" err="pods 
\"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0" Mar 18 09:54:45.866271 master-0 kubenswrapper[8244]: I0318 09:54:45.866227 8244 request.go:700] Waited for 1.014228318s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/kube-system/pods Mar 18 09:54:45.876801 master-0 kubenswrapper[8244]: E0318 09:54:45.876761 8244 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 09:54:45.901624 master-0 kubenswrapper[8244]: I0318 09:54:45.901591 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b46jq\" (UniqueName: \"kubernetes.io/projected/5ea90fee-5b5e-4b59-bfc4-969ee8c7912e-kube-api-access-b46jq\") pod \"service-ca-79bc6b8d76-jjcsv\" (UID: \"5ea90fee-5b5e-4b59-bfc4-969ee8c7912e\") " pod="openshift-service-ca/service-ca-79bc6b8d76-jjcsv" Mar 18 09:54:45.926960 master-0 kubenswrapper[8244]: I0318 09:54:45.926907 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fn45\" (UniqueName: \"kubernetes.io/projected/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-kube-api-access-4fn45\") pod \"route-controller-manager-966f67d76-nzx5f\" (UID: \"c2d6bbbe-b2fb-4df3-b93f-b3b47532719b\") " pod="openshift-route-controller-manager/route-controller-manager-966f67d76-nzx5f" Mar 18 09:54:45.969172 master-0 kubenswrapper[8244]: I0318 09:54:45.969017 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-79bc6b8d76-jjcsv" Mar 18 09:54:46.132848 master-0 kubenswrapper[8244]: I0318 09:54:46.132459 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-79bc6b8d76-jjcsv"] Mar 18 09:54:46.143050 master-0 kubenswrapper[8244]: W0318 09:54:46.143014 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ea90fee_5b5e_4b59_bfc4_969ee8c7912e.slice/crio-7f312c72332d1eca8944cf91ca9c1d896c13f62ea944da320c89182c0dd4ab06 WatchSource:0}: Error finding container 7f312c72332d1eca8944cf91ca9c1d896c13f62ea944da320c89182c0dd4ab06: Status 404 returned error can't find the container with id 7f312c72332d1eca8944cf91ca9c1d896c13f62ea944da320c89182c0dd4ab06 Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: I0318 09:54:46.457779 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs\") pod \"network-metrics-daemon-tbxt4\" (UID: \"0442ec6c-5973-40a5-a0c3-dc02de46d343\") " pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: I0318 09:54:46.458079 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls\") pod \"dns-operator-9c5679d8f-jrmkr\" (UID: \"8cb5158f-2199-42c0-995a-8490c9ec8a95\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-jrmkr" Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: I0318 09:54:46.458108 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-serving-cert\") pod \"route-controller-manager-966f67d76-nzx5f\" (UID: \"c2d6bbbe-b2fb-4df3-b93f-b3b47532719b\") " 
pod="openshift-route-controller-manager/route-controller-manager-966f67d76-nzx5f" Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: I0318 09:54:46.458129 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: I0318 09:54:46.458146 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-hkzr2\" (UID: \"ca4a0040-a638-46fa-a1cb-a19d83a7ebe4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2" Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: I0318 09:54:46.458166 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-r8fkv\" (UID: \"d4d2218c-f9df-4d43-8727-ed3a920e23f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv" Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: I0318 09:54:46.458184 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: I0318 09:54:46.458201 8244 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr" Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: I0318 09:54:46.458219 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert\") pod \"catalog-operator-68f85b4d6c-fhz5s\" (UID: \"ee376320-9ca0-444d-ab37-9cbcb6729b11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s" Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: I0318 09:54:46.458236 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert\") pod \"olm-operator-5c9796789-hc74k\" (UID: \"db52ca42-e458-407f-9eeb-bf6de6405edc\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k" Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: I0318 09:54:46.458251 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-client-ca\") pod \"route-controller-manager-966f67d76-nzx5f\" (UID: \"c2d6bbbe-b2fb-4df3-b93f-b3b47532719b\") " pod="openshift-route-controller-manager/route-controller-manager-966f67d76-nzx5f" Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: I0318 09:54:46.458271 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: 
\"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m" Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: I0318 09:54:46.458291 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: I0318 09:54:46.458309 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-8kx9m\" (UID: \"f69a00b6-d908-4485-bb0d-57594fc01d24\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m" Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: I0318 09:54:46.458325 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-2glpv\" (UID: \"6f266bad-8b30-4300-ad93-9d48e61f2440\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: E0318 09:54:46.457955 8244 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: E0318 09:54:46.458477 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs podName:0442ec6c-5973-40a5-a0c3-dc02de46d343 nodeName:}" 
failed. No retries permitted until 2026-03-18 09:54:48.458462953 +0000 UTC m=+4.938199081 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs") pod "network-metrics-daemon-tbxt4" (UID: "0442ec6c-5973-40a5-a0c3-dc02de46d343") : secret "metrics-daemon-secret" not found Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: E0318 09:54:46.458770 8244 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: E0318 09:54:46.458793 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls podName:8cb5158f-2199-42c0-995a-8490c9ec8a95 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:48.458786021 +0000 UTC m=+4.938522149 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls") pod "dns-operator-9c5679d8f-jrmkr" (UID: "8cb5158f-2199-42c0-995a-8490c9ec8a95") : secret "metrics-tls" not found Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: E0318 09:54:46.458844 8244 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: E0318 09:54:46.458863 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-serving-cert podName:c2d6bbbe-b2fb-4df3-b93f-b3b47532719b nodeName:}" failed. No retries permitted until 2026-03-18 09:54:48.458856712 +0000 UTC m=+4.938592840 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-serving-cert") pod "route-controller-manager-966f67d76-nzx5f" (UID: "c2d6bbbe-b2fb-4df3-b93f-b3b47532719b") : secret "serving-cert" not found Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: E0318 09:54:46.458432 8244 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: E0318 09:54:46.458895 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics podName:6f266bad-8b30-4300-ad93-9d48e61f2440 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:48.458889953 +0000 UTC m=+4.938626081 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-2glpv" (UID: "6f266bad-8b30-4300-ad93-9d48e61f2440") : secret "marketplace-operator-metrics" not found Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: E0318 09:54:46.458915 8244 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: E0318 09:54:46.458986 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls podName:accc57fb-75f5-4f89-9804-6ede7f77e27c nodeName:}" failed. No retries permitted until 2026-03-18 09:54:48.458967745 +0000 UTC m=+4.938703873 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls") pod "ingress-operator-66b84d69b-kr5kz" (UID: "accc57fb-75f5-4f89-9804-6ede7f77e27c") : secret "metrics-tls" not found Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: E0318 09:54:46.458986 8244 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: E0318 09:54:46.459011 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert podName:15f8941b-dba2-40ba-86d5-3318f5b635cc nodeName:}" failed. No retries permitted until 2026-03-18 09:54:48.459003986 +0000 UTC m=+4.938740114 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert") pod "cluster-version-operator-56d8475767-c2qzr" (UID: "15f8941b-dba2-40ba-86d5-3318f5b635cc") : secret "cluster-version-operator-serving-cert" not found Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: E0318 09:54:46.459014 8244 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: E0318 09:54:46.459035 8244 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: E0318 09:54:46.459048 8244 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: E0318 09:54:46.459055 8244 secret.go:189] Couldn't get secret 
openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: E0318 09:54:46.459085 8244 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: E0318 09:54:46.459057 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert podName:ee376320-9ca0-444d-ab37-9cbcb6729b11 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:48.459051467 +0000 UTC m=+4.938787585 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert") pod "catalog-operator-68f85b4d6c-fhz5s" (UID: "ee376320-9ca0-444d-ab37-9cbcb6729b11") : secret "catalog-operator-serving-cert" not found Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: E0318 09:54:46.459113 8244 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: E0318 09:54:46.459014 8244 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: E0318 09:54:46.459115 8244 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: E0318 09:54:46.459129 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls podName:8ee99294-4785-49d0-b493-0d734cf09396 nodeName:}" failed. 
No retries permitted until 2026-03-18 09:54:48.459108878 +0000 UTC m=+4.938845056 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-f4f7m" (UID: "8ee99294-4785-49d0-b493-0d734cf09396") : secret "image-registry-operator-tls" not found Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: E0318 09:54:46.459153 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert podName:c2635254-a491-42e5-b598-461c24bf77ca nodeName:}" failed. No retries permitted until 2026-03-18 09:54:48.459143959 +0000 UTC m=+4.938880197 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-s7rm6" (UID: "c2635254-a491-42e5-b598-461c24bf77ca") : secret "performance-addon-operator-webhook-cert" not found Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: E0318 09:54:46.459171 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls podName:f69a00b6-d908-4485-bb0d-57594fc01d24 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:48.45916363 +0000 UTC m=+4.938899868 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-8kx9m" (UID: "f69a00b6-d908-4485-bb0d-57594fc01d24") : secret "cluster-monitoring-operator-tls" not found Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: E0318 09:54:46.459184 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert podName:d4d2218c-f9df-4d43-8727-ed3a920e23f7 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:48.45917709 +0000 UTC m=+4.938913318 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-r8fkv" (UID: "d4d2218c-f9df-4d43-8727-ed3a920e23f7") : secret "package-server-manager-serving-cert" not found Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: E0318 09:54:46.459199 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert podName:db52ca42-e458-407f-9eeb-bf6de6405edc nodeName:}" failed. No retries permitted until 2026-03-18 09:54:48.45919067 +0000 UTC m=+4.938926888 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert") pod "olm-operator-5c9796789-hc74k" (UID: "db52ca42-e458-407f-9eeb-bf6de6405edc") : secret "olm-operator-serving-cert" not found Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: E0318 09:54:46.459212 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs podName:ca4a0040-a638-46fa-a1cb-a19d83a7ebe4 nodeName:}" failed. 
No retries permitted until 2026-03-18 09:54:48.459205101 +0000 UTC m=+4.938941319 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-hkzr2" (UID: "ca4a0040-a638-46fa-a1cb-a19d83a7ebe4") : secret "multus-admission-controller-secret" not found Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: E0318 09:54:46.459227 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-client-ca podName:c2d6bbbe-b2fb-4df3-b93f-b3b47532719b nodeName:}" failed. No retries permitted until 2026-03-18 09:54:48.459219981 +0000 UTC m=+4.938956199 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-client-ca") pod "route-controller-manager-966f67d76-nzx5f" (UID: "c2d6bbbe-b2fb-4df3-b93f-b3b47532719b") : configmap "client-ca" not found Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: E0318 09:54:46.459087 8244 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 09:54:46.459774 master-0 kubenswrapper[8244]: E0318 09:54:46.459277 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls podName:c2635254-a491-42e5-b598-461c24bf77ca nodeName:}" failed. No retries permitted until 2026-03-18 09:54:48.459268412 +0000 UTC m=+4.939004630 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-s7rm6" (UID: "c2635254-a491-42e5-b598-461c24bf77ca") : secret "node-tuning-operator-tls" not found Mar 18 09:54:46.581849 master-0 kubenswrapper[8244]: I0318 09:54:46.581177 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 09:54:46.872040 master-0 kubenswrapper[8244]: I0318 09:54:46.871898 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-jjcsv" event={"ID":"5ea90fee-5b5e-4b59-bfc4-969ee8c7912e","Type":"ContainerStarted","Data":"ba2a4b371f548813e64e9936bac5f8a30427b5b6c9ba22e587be7235d007fdc6"} Mar 18 09:54:46.872040 master-0 kubenswrapper[8244]: I0318 09:54:46.871947 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-jjcsv" event={"ID":"5ea90fee-5b5e-4b59-bfc4-969ee8c7912e","Type":"ContainerStarted","Data":"7f312c72332d1eca8944cf91ca9c1d896c13f62ea944da320c89182c0dd4ab06"} Mar 18 09:54:46.874481 master-0 kubenswrapper[8244]: I0318 09:54:46.874355 8244 generic.go:334] "Generic (PLEG): container finished" podID="d26036f1-bdce-4ec5-873f-962fa7e8e6c1" containerID="c8f206ca8c94fc19bfa804f2e3458858b441e4df0a8873ee86942ce37a6e1dff" exitCode=0 Mar 18 09:54:46.875125 master-0 kubenswrapper[8244]: I0318 09:54:46.875070 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q" event={"ID":"d26036f1-bdce-4ec5-873f-962fa7e8e6c1","Type":"ContainerDied","Data":"c8f206ca8c94fc19bfa804f2e3458858b441e4df0a8873ee86942ce37a6e1dff"} Mar 18 09:54:47.331484 master-0 kubenswrapper[8244]: I0318 09:54:47.331419 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 09:54:47.758105 master-0 kubenswrapper[8244]: I0318 09:54:47.757956 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:47.780290 master-0 kubenswrapper[8244]: I0318 09:54:47.779977 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-c98c8d65f-kh9fk"] Mar 18 09:54:47.780290 master-0 kubenswrapper[8244]: E0318 09:54:47.780161 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3796179a-f6c1-4f97-a2e1-d32106a5d8e9" containerName="prober" Mar 18 09:54:47.780290 master-0 kubenswrapper[8244]: I0318 09:54:47.780173 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="3796179a-f6c1-4f97-a2e1-d32106a5d8e9" containerName="prober" Mar 18 09:54:47.780290 master-0 kubenswrapper[8244]: E0318 09:54:47.780182 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cda3479-c3ed-4d79-bbd3-888e64b328f7" containerName="assisted-installer-controller" Mar 18 09:54:47.780290 master-0 kubenswrapper[8244]: I0318 09:54:47.780191 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cda3479-c3ed-4d79-bbd3-888e64b328f7" containerName="assisted-installer-controller" Mar 18 09:54:47.780290 master-0 kubenswrapper[8244]: I0318 09:54:47.780254 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cda3479-c3ed-4d79-bbd3-888e64b328f7" containerName="assisted-installer-controller" Mar 18 09:54:47.780290 master-0 kubenswrapper[8244]: I0318 09:54:47.780267 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="3796179a-f6c1-4f97-a2e1-d32106a5d8e9" containerName="prober" Mar 18 09:54:47.781923 master-0 kubenswrapper[8244]: I0318 09:54:47.781887 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c98c8d65f-kh9fk" Mar 18 09:54:47.782358 master-0 kubenswrapper[8244]: I0318 09:54:47.782326 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-zvdvg"] Mar 18 09:54:47.783526 master-0 kubenswrapper[8244]: I0318 09:54:47.783489 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 09:54:47.785261 master-0 kubenswrapper[8244]: I0318 09:54:47.785102 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 18 09:54:47.785430 master-0 kubenswrapper[8244]: I0318 09:54:47.785408 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 18 09:54:47.785609 master-0 kubenswrapper[8244]: I0318 09:54:47.785588 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 09:54:47.785848 master-0 kubenswrapper[8244]: I0318 09:54:47.785729 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 09:54:47.790692 master-0 kubenswrapper[8244]: I0318 09:54:47.790657 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 09:54:47.792523 master-0 kubenswrapper[8244]: I0318 09:54:47.792103 8244 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-zvdvg"] Mar 18 09:54:47.792523 master-0 kubenswrapper[8244]: I0318 09:54:47.792182 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c98c8d65f-kh9fk"] Mar 18 09:54:47.802027 master-0 kubenswrapper[8244]: I0318 09:54:47.800599 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:47.829028 master-0 kubenswrapper[8244]: I0318 09:54:47.828900 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-79bc6b8d76-jjcsv" podStartSLOduration=3.8288788499999997 podStartE2EDuration="3.82887885s" podCreationTimestamp="2026-03-18 09:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:54:47.828553552 +0000 UTC m=+4.308289680" watchObservedRunningTime="2026-03-18 09:54:47.82887885 +0000 UTC m=+4.308614978" Mar 18 09:54:47.878960 master-0 kubenswrapper[8244]: I0318 09:54:47.878929 8244 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 09:54:47.879754 master-0 kubenswrapper[8244]: I0318 09:54:47.879740 8244 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 09:54:47.901072 master-0 kubenswrapper[8244]: I0318 09:54:47.901001 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6gsl\" (UniqueName: \"kubernetes.io/projected/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-kube-api-access-p6gsl\") pod \"controller-manager-c98c8d65f-kh9fk\" (UID: \"de85e63f-d7e1-4b5c-9e7f-f1b679b59158\") " pod="openshift-controller-manager/controller-manager-c98c8d65f-kh9fk" Mar 18 09:54:47.901221 master-0 kubenswrapper[8244]: I0318 09:54:47.901082 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-config\") pod \"controller-manager-c98c8d65f-kh9fk\" (UID: \"de85e63f-d7e1-4b5c-9e7f-f1b679b59158\") " pod="openshift-controller-manager/controller-manager-c98c8d65f-kh9fk" Mar 18 09:54:47.901221 master-0 kubenswrapper[8244]: I0318 09:54:47.901143 8244 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-proxy-ca-bundles\") pod \"controller-manager-c98c8d65f-kh9fk\" (UID: \"de85e63f-d7e1-4b5c-9e7f-f1b679b59158\") " pod="openshift-controller-manager/controller-manager-c98c8d65f-kh9fk" Mar 18 09:54:47.901221 master-0 kubenswrapper[8244]: I0318 09:54:47.901159 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-client-ca\") pod \"controller-manager-c98c8d65f-kh9fk\" (UID: \"de85e63f-d7e1-4b5c-9e7f-f1b679b59158\") " pod="openshift-controller-manager/controller-manager-c98c8d65f-kh9fk" Mar 18 09:54:47.901365 master-0 kubenswrapper[8244]: I0318 09:54:47.901238 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-serving-cert\") pod \"controller-manager-c98c8d65f-kh9fk\" (UID: \"de85e63f-d7e1-4b5c-9e7f-f1b679b59158\") " pod="openshift-controller-manager/controller-manager-c98c8d65f-kh9fk" Mar 18 09:54:48.003602 master-0 kubenswrapper[8244]: I0318 09:54:48.003263 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6gsl\" (UniqueName: \"kubernetes.io/projected/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-kube-api-access-p6gsl\") pod \"controller-manager-c98c8d65f-kh9fk\" (UID: \"de85e63f-d7e1-4b5c-9e7f-f1b679b59158\") " pod="openshift-controller-manager/controller-manager-c98c8d65f-kh9fk" Mar 18 09:54:48.003602 master-0 kubenswrapper[8244]: I0318 09:54:48.003320 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-config\") pod \"controller-manager-c98c8d65f-kh9fk\" (UID: 
\"de85e63f-d7e1-4b5c-9e7f-f1b679b59158\") " pod="openshift-controller-manager/controller-manager-c98c8d65f-kh9fk" Mar 18 09:54:48.003602 master-0 kubenswrapper[8244]: I0318 09:54:48.003369 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-proxy-ca-bundles\") pod \"controller-manager-c98c8d65f-kh9fk\" (UID: \"de85e63f-d7e1-4b5c-9e7f-f1b679b59158\") " pod="openshift-controller-manager/controller-manager-c98c8d65f-kh9fk" Mar 18 09:54:48.003602 master-0 kubenswrapper[8244]: I0318 09:54:48.003390 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-client-ca\") pod \"controller-manager-c98c8d65f-kh9fk\" (UID: \"de85e63f-d7e1-4b5c-9e7f-f1b679b59158\") " pod="openshift-controller-manager/controller-manager-c98c8d65f-kh9fk" Mar 18 09:54:48.003602 master-0 kubenswrapper[8244]: I0318 09:54:48.003476 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-serving-cert\") pod \"controller-manager-c98c8d65f-kh9fk\" (UID: \"de85e63f-d7e1-4b5c-9e7f-f1b679b59158\") " pod="openshift-controller-manager/controller-manager-c98c8d65f-kh9fk" Mar 18 09:54:48.005629 master-0 kubenswrapper[8244]: E0318 09:54:48.005035 8244 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 09:54:48.005629 master-0 kubenswrapper[8244]: I0318 09:54:48.005049 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-config\") pod \"controller-manager-c98c8d65f-kh9fk\" (UID: \"de85e63f-d7e1-4b5c-9e7f-f1b679b59158\") " pod="openshift-controller-manager/controller-manager-c98c8d65f-kh9fk" Mar 
18 09:54:48.006169 master-0 kubenswrapper[8244]: E0318 09:54:48.006086 8244 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 18 09:54:48.008102 master-0 kubenswrapper[8244]: E0318 09:54:48.008045 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-serving-cert podName:de85e63f-d7e1-4b5c-9e7f-f1b679b59158 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:48.508003219 +0000 UTC m=+4.987739397 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-serving-cert") pod "controller-manager-c98c8d65f-kh9fk" (UID: "de85e63f-d7e1-4b5c-9e7f-f1b679b59158") : secret "serving-cert" not found Mar 18 09:54:48.008217 master-0 kubenswrapper[8244]: E0318 09:54:48.008133 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-client-ca podName:de85e63f-d7e1-4b5c-9e7f-f1b679b59158 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:48.508110971 +0000 UTC m=+4.987847139 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-client-ca") pod "controller-manager-c98c8d65f-kh9fk" (UID: "de85e63f-d7e1-4b5c-9e7f-f1b679b59158") : configmap "client-ca" not found Mar 18 09:54:48.009518 master-0 kubenswrapper[8244]: I0318 09:54:48.009469 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-proxy-ca-bundles\") pod \"controller-manager-c98c8d65f-kh9fk\" (UID: \"de85e63f-d7e1-4b5c-9e7f-f1b679b59158\") " pod="openshift-controller-manager/controller-manager-c98c8d65f-kh9fk" Mar 18 09:54:48.034113 master-0 kubenswrapper[8244]: I0318 09:54:48.034074 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6gsl\" (UniqueName: \"kubernetes.io/projected/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-kube-api-access-p6gsl\") pod \"controller-manager-c98c8d65f-kh9fk\" (UID: \"de85e63f-d7e1-4b5c-9e7f-f1b679b59158\") " pod="openshift-controller-manager/controller-manager-c98c8d65f-kh9fk" Mar 18 09:54:48.184677 master-0 kubenswrapper[8244]: I0318 09:54:48.184610 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" Mar 18 09:54:48.184895 master-0 kubenswrapper[8244]: I0318 09:54:48.184805 8244 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 09:54:48.197204 master-0 kubenswrapper[8244]: I0318 09:54:48.197144 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" Mar 18 09:54:48.224994 master-0 kubenswrapper[8244]: I0318 09:54:48.224923 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 09:54:48.514381 master-0 kubenswrapper[8244]: I0318 
09:54:48.514305 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" Mar 18 09:54:48.514381 master-0 kubenswrapper[8244]: I0318 09:54:48.514374 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-8kx9m\" (UID: \"f69a00b6-d908-4485-bb0d-57594fc01d24\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m" Mar 18 09:54:48.514621 master-0 kubenswrapper[8244]: I0318 09:54:48.514402 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-2glpv\" (UID: \"6f266bad-8b30-4300-ad93-9d48e61f2440\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" Mar 18 09:54:48.514621 master-0 kubenswrapper[8244]: I0318 09:54:48.514428 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls\") pod \"dns-operator-9c5679d8f-jrmkr\" (UID: \"8cb5158f-2199-42c0-995a-8490c9ec8a95\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-jrmkr" Mar 18 09:54:48.514621 master-0 kubenswrapper[8244]: I0318 09:54:48.514451 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs\") pod \"network-metrics-daemon-tbxt4\" (UID: \"0442ec6c-5973-40a5-a0c3-dc02de46d343\") " pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:54:48.514621 master-0 kubenswrapper[8244]: E0318 09:54:48.514466 8244 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 09:54:48.514621 master-0 kubenswrapper[8244]: E0318 09:54:48.514532 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls podName:c2635254-a491-42e5-b598-461c24bf77ca nodeName:}" failed. No retries permitted until 2026-03-18 09:54:52.514509903 +0000 UTC m=+8.994246021 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-s7rm6" (UID: "c2635254-a491-42e5-b598-461c24bf77ca") : secret "node-tuning-operator-tls" not found Mar 18 09:54:48.514893 master-0 kubenswrapper[8244]: E0318 09:54:48.514621 8244 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 09:54:48.514893 master-0 kubenswrapper[8244]: E0318 09:54:48.514746 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls podName:f69a00b6-d908-4485-bb0d-57594fc01d24 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:52.514716488 +0000 UTC m=+8.994452656 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-8kx9m" (UID: "f69a00b6-d908-4485-bb0d-57594fc01d24") : secret "cluster-monitoring-operator-tls" not found Mar 18 09:54:48.514893 master-0 kubenswrapper[8244]: I0318 09:54:48.514810 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-client-ca\") pod \"controller-manager-c98c8d65f-kh9fk\" (UID: \"de85e63f-d7e1-4b5c-9e7f-f1b679b59158\") " pod="openshift-controller-manager/controller-manager-c98c8d65f-kh9fk" Mar 18 09:54:48.514987 master-0 kubenswrapper[8244]: I0318 09:54:48.514924 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-serving-cert\") pod \"route-controller-manager-966f67d76-nzx5f\" (UID: \"c2d6bbbe-b2fb-4df3-b93f-b3b47532719b\") " pod="openshift-route-controller-manager/route-controller-manager-966f67d76-nzx5f" Mar 18 09:54:48.514987 master-0 kubenswrapper[8244]: E0318 09:54:48.514948 8244 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 18 09:54:48.514987 master-0 kubenswrapper[8244]: E0318 09:54:48.514978 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs podName:0442ec6c-5973-40a5-a0c3-dc02de46d343 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:52.514969424 +0000 UTC m=+8.994705552 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs") pod "network-metrics-daemon-tbxt4" (UID: "0442ec6c-5973-40a5-a0c3-dc02de46d343") : secret "metrics-daemon-secret" not found Mar 18 09:54:48.515080 master-0 kubenswrapper[8244]: I0318 09:54:48.514998 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" Mar 18 09:54:48.515080 master-0 kubenswrapper[8244]: I0318 09:54:48.515024 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-hkzr2\" (UID: \"ca4a0040-a638-46fa-a1cb-a19d83a7ebe4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2" Mar 18 09:54:48.515080 master-0 kubenswrapper[8244]: E0318 09:54:48.515039 8244 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 09:54:48.515168 master-0 kubenswrapper[8244]: E0318 09:54:48.515083 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-serving-cert podName:c2d6bbbe-b2fb-4df3-b93f-b3b47532719b nodeName:}" failed. No retries permitted until 2026-03-18 09:54:52.515069046 +0000 UTC m=+8.994805204 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-serving-cert") pod "route-controller-manager-966f67d76-nzx5f" (UID: "c2d6bbbe-b2fb-4df3-b93f-b3b47532719b") : secret "serving-cert" not found Mar 18 09:54:48.515168 master-0 kubenswrapper[8244]: E0318 09:54:48.515083 8244 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 09:54:48.515168 master-0 kubenswrapper[8244]: E0318 09:54:48.515112 8244 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 09:54:48.515168 master-0 kubenswrapper[8244]: E0318 09:54:48.515129 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert podName:d4d2218c-f9df-4d43-8727-ed3a920e23f7 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:52.515117798 +0000 UTC m=+8.994853956 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-r8fkv" (UID: "d4d2218c-f9df-4d43-8727-ed3a920e23f7") : secret "package-server-manager-serving-cert" not found Mar 18 09:54:48.515168 master-0 kubenswrapper[8244]: E0318 09:54:48.515150 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics podName:6f266bad-8b30-4300-ad93-9d48e61f2440 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:52.515139748 +0000 UTC m=+8.994875906 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-2glpv" (UID: "6f266bad-8b30-4300-ad93-9d48e61f2440") : secret "marketplace-operator-metrics" not found Mar 18 09:54:48.515168 master-0 kubenswrapper[8244]: E0318 09:54:48.515153 8244 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 09:54:48.515336 master-0 kubenswrapper[8244]: E0318 09:54:48.515192 8244 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 09:54:48.515336 master-0 kubenswrapper[8244]: I0318 09:54:48.515046 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-r8fkv\" (UID: \"d4d2218c-f9df-4d43-8727-ed3a920e23f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv" Mar 18 09:54:48.515336 master-0 kubenswrapper[8244]: E0318 09:54:48.515256 8244 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 09:54:48.515336 master-0 kubenswrapper[8244]: E0318 09:54:48.515195 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls podName:8cb5158f-2199-42c0-995a-8490c9ec8a95 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:52.515184429 +0000 UTC m=+8.994920597 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls") pod "dns-operator-9c5679d8f-jrmkr" (UID: "8cb5158f-2199-42c0-995a-8490c9ec8a95") : secret "metrics-tls" not found
Mar 18 09:54:48.515336 master-0 kubenswrapper[8244]: E0318 09:54:48.515289 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls podName:accc57fb-75f5-4f89-9804-6ede7f77e27c nodeName:}" failed. No retries permitted until 2026-03-18 09:54:52.515277641 +0000 UTC m=+8.995013809 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls") pod "ingress-operator-66b84d69b-kr5kz" (UID: "accc57fb-75f5-4f89-9804-6ede7f77e27c") : secret "metrics-tls" not found
Mar 18 09:54:48.515336 master-0 kubenswrapper[8244]: E0318 09:54:48.515296 8244 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 18 09:54:48.515336 master-0 kubenswrapper[8244]: E0318 09:54:48.515323 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-client-ca podName:de85e63f-d7e1-4b5c-9e7f-f1b679b59158 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:49.515315362 +0000 UTC m=+5.995051490 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-client-ca") pod "controller-manager-c98c8d65f-kh9fk" (UID: "de85e63f-d7e1-4b5c-9e7f-f1b679b59158") : configmap "client-ca" not found
Mar 18 09:54:48.515336 master-0 kubenswrapper[8244]: I0318 09:54:48.515316 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6"
Mar 18 09:54:48.515553 master-0 kubenswrapper[8244]: E0318 09:54:48.515340 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs podName:ca4a0040-a638-46fa-a1cb-a19d83a7ebe4 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:52.515329043 +0000 UTC m=+8.995065211 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-hkzr2" (UID: "ca4a0040-a638-46fa-a1cb-a19d83a7ebe4") : secret "multus-admission-controller-secret" not found
Mar 18 09:54:48.515553 master-0 kubenswrapper[8244]: I0318 09:54:48.515369 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-serving-cert\") pod \"controller-manager-c98c8d65f-kh9fk\" (UID: \"de85e63f-d7e1-4b5c-9e7f-f1b679b59158\") " pod="openshift-controller-manager/controller-manager-c98c8d65f-kh9fk"
Mar 18 09:54:48.515553 master-0 kubenswrapper[8244]: I0318 09:54:48.515415 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr"
Mar 18 09:54:48.515553 master-0 kubenswrapper[8244]: I0318 09:54:48.515453 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert\") pod \"catalog-operator-68f85b4d6c-fhz5s\" (UID: \"ee376320-9ca0-444d-ab37-9cbcb6729b11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s"
Mar 18 09:54:48.515553 master-0 kubenswrapper[8244]: I0318 09:54:48.515492 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert\") pod \"olm-operator-5c9796789-hc74k\" (UID: \"db52ca42-e458-407f-9eeb-bf6de6405edc\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k"
Mar 18 09:54:48.515553 master-0 kubenswrapper[8244]: E0318 09:54:48.515378 8244 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 18 09:54:48.515553 master-0 kubenswrapper[8244]: I0318 09:54:48.515523 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-client-ca\") pod \"route-controller-manager-966f67d76-nzx5f\" (UID: \"c2d6bbbe-b2fb-4df3-b93f-b3b47532719b\") " pod="openshift-route-controller-manager/route-controller-manager-966f67d76-nzx5f"
Mar 18 09:54:48.515553 master-0 kubenswrapper[8244]: E0318 09:54:48.515541 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert podName:c2635254-a491-42e5-b598-461c24bf77ca nodeName:}" failed. No retries permitted until 2026-03-18 09:54:52.515531918 +0000 UTC m=+8.995268046 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-s7rm6" (UID: "c2635254-a491-42e5-b598-461c24bf77ca") : secret "performance-addon-operator-webhook-cert" not found
Mar 18 09:54:48.515553 master-0 kubenswrapper[8244]: E0318 09:54:48.515479 8244 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 18 09:54:48.515878 master-0 kubenswrapper[8244]: E0318 09:54:48.515564 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-serving-cert podName:de85e63f-d7e1-4b5c-9e7f-f1b679b59158 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:49.515559258 +0000 UTC m=+5.995295386 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-serving-cert") pod "controller-manager-c98c8d65f-kh9fk" (UID: "de85e63f-d7e1-4b5c-9e7f-f1b679b59158") : secret "serving-cert" not found
Mar 18 09:54:48.515878 master-0 kubenswrapper[8244]: E0318 09:54:48.515512 8244 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 18 09:54:48.515878 master-0 kubenswrapper[8244]: I0318 09:54:48.515581 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m"
Mar 18 09:54:48.515878 master-0 kubenswrapper[8244]: E0318 09:54:48.515599 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert podName:15f8941b-dba2-40ba-86d5-3318f5b635cc nodeName:}" failed. No retries permitted until 2026-03-18 09:54:52.515588899 +0000 UTC m=+8.995325127 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert") pod "cluster-version-operator-56d8475767-c2qzr" (UID: "15f8941b-dba2-40ba-86d5-3318f5b635cc") : secret "cluster-version-operator-serving-cert" not found
Mar 18 09:54:48.515878 master-0 kubenswrapper[8244]: E0318 09:54:48.515652 8244 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 18 09:54:48.515878 master-0 kubenswrapper[8244]: E0318 09:54:48.515671 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert podName:db52ca42-e458-407f-9eeb-bf6de6405edc nodeName:}" failed. No retries permitted until 2026-03-18 09:54:52.515666231 +0000 UTC m=+8.995402359 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert") pod "olm-operator-5c9796789-hc74k" (UID: "db52ca42-e458-407f-9eeb-bf6de6405edc") : secret "olm-operator-serving-cert" not found
Mar 18 09:54:48.515878 master-0 kubenswrapper[8244]: E0318 09:54:48.515709 8244 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 18 09:54:48.515878 master-0 kubenswrapper[8244]: E0318 09:54:48.515723 8244 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 18 09:54:48.515878 master-0 kubenswrapper[8244]: E0318 09:54:48.515748 8244 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 18 09:54:48.515878 master-0 kubenswrapper[8244]: E0318 09:54:48.515726 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert podName:ee376320-9ca0-444d-ab37-9cbcb6729b11 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:52.515719742 +0000 UTC m=+8.995455870 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert") pod "catalog-operator-68f85b4d6c-fhz5s" (UID: "ee376320-9ca0-444d-ab37-9cbcb6729b11") : secret "catalog-operator-serving-cert" not found
Mar 18 09:54:48.515878 master-0 kubenswrapper[8244]: E0318 09:54:48.515781 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-client-ca podName:c2d6bbbe-b2fb-4df3-b93f-b3b47532719b nodeName:}" failed. No retries permitted until 2026-03-18 09:54:52.515768793 +0000 UTC m=+8.995504951 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-client-ca") pod "route-controller-manager-966f67d76-nzx5f" (UID: "c2d6bbbe-b2fb-4df3-b93f-b3b47532719b") : configmap "client-ca" not found
Mar 18 09:54:48.515878 master-0 kubenswrapper[8244]: E0318 09:54:48.515801 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls podName:8ee99294-4785-49d0-b493-0d734cf09396 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:52.515791064 +0000 UTC m=+8.995527222 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-f4f7m" (UID: "8ee99294-4785-49d0-b493-0d734cf09396") : secret "image-registry-operator-tls" not found
Mar 18 09:54:48.975906 master-0 kubenswrapper[8244]: I0318 09:54:48.974369 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:54:48.990535 master-0 kubenswrapper[8244]: I0318 09:54:48.990265 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:54:49.573249 master-0 kubenswrapper[8244]: I0318 09:54:49.573140 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-serving-cert\") pod \"controller-manager-c98c8d65f-kh9fk\" (UID: \"de85e63f-d7e1-4b5c-9e7f-f1b679b59158\") " pod="openshift-controller-manager/controller-manager-c98c8d65f-kh9fk"
Mar 18 09:54:49.573249 master-0 kubenswrapper[8244]: E0318 09:54:49.573283 8244 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 18 09:54:49.573734 master-0 kubenswrapper[8244]: E0318 09:54:49.573346 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-serving-cert podName:de85e63f-d7e1-4b5c-9e7f-f1b679b59158 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:51.573331134 +0000 UTC m=+8.053067262 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-serving-cert") pod "controller-manager-c98c8d65f-kh9fk" (UID: "de85e63f-d7e1-4b5c-9e7f-f1b679b59158") : secret "serving-cert" not found
Mar 18 09:54:49.573734 master-0 kubenswrapper[8244]: I0318 09:54:49.573391 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-client-ca\") pod \"controller-manager-c98c8d65f-kh9fk\" (UID: \"de85e63f-d7e1-4b5c-9e7f-f1b679b59158\") " pod="openshift-controller-manager/controller-manager-c98c8d65f-kh9fk"
Mar 18 09:54:49.573734 master-0 kubenswrapper[8244]: E0318 09:54:49.573471 8244 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 18 09:54:49.573734 master-0 kubenswrapper[8244]: E0318 09:54:49.573491 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-client-ca podName:de85e63f-d7e1-4b5c-9e7f-f1b679b59158 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:51.573484838 +0000 UTC m=+8.053220966 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-client-ca") pod "controller-manager-c98c8d65f-kh9fk" (UID: "de85e63f-d7e1-4b5c-9e7f-f1b679b59158") : configmap "client-ca" not found
Mar 18 09:54:49.736926 master-0 kubenswrapper[8244]: I0318 09:54:49.736886 8244 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="025ade16-8502-4b71-a4be-f13dee081e3a" path="/var/lib/kubelet/pods/025ade16-8502-4b71-a4be-f13dee081e3a/volumes"
Mar 18 09:54:49.893386 master-0 kubenswrapper[8244]: I0318 09:54:49.893266 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-8487694857-8tqwj" event={"ID":"8126b78e-d1e4-4de7-a71d-ebc9fa0afdae","Type":"ContainerStarted","Data":"2892fe5cd2057b58ba353ccee76d5af0b42158f4b4682e11195810d96c676dbd"}
Mar 18 09:54:49.893386 master-0 kubenswrapper[8244]: I0318 09:54:49.893344 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-8487694857-8tqwj" event={"ID":"8126b78e-d1e4-4de7-a71d-ebc9fa0afdae","Type":"ContainerStarted","Data":"394136b16764c9fc827dc10b9cd0ccfede02cbe3e3c4751b9a528163350ea0df"}
Mar 18 09:54:49.896090 master-0 kubenswrapper[8244]: I0318 09:54:49.895331 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-2l6cq" event={"ID":"932a70df-3afe-4873-9449-ab6e061d3fe3","Type":"ContainerStarted","Data":"17c5a6d0d57e33e7edf72cf60a77174890881333b1c35130459a5598516f267c"}
Mar 18 09:54:49.899508 master-0 kubenswrapper[8244]: I0318 09:54:49.899464 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 09:54:51.461574 master-0 kubenswrapper[8244]: I0318 09:54:51.461190 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:54:51.517860 master-0 kubenswrapper[8244]: I0318 09:54:51.517537 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:54:51.554859 master-0 kubenswrapper[8244]: I0318 09:54:51.554685 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-42l55"
Mar 18 09:54:51.605852 master-0 kubenswrapper[8244]: I0318 09:54:51.603024 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-serving-cert\") pod \"controller-manager-c98c8d65f-kh9fk\" (UID: \"de85e63f-d7e1-4b5c-9e7f-f1b679b59158\") " pod="openshift-controller-manager/controller-manager-c98c8d65f-kh9fk"
Mar 18 09:54:51.605852 master-0 kubenswrapper[8244]: I0318 09:54:51.603279 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-client-ca\") pod \"controller-manager-c98c8d65f-kh9fk\" (UID: \"de85e63f-d7e1-4b5c-9e7f-f1b679b59158\") " pod="openshift-controller-manager/controller-manager-c98c8d65f-kh9fk"
Mar 18 09:54:51.605852 master-0 kubenswrapper[8244]: E0318 09:54:51.603533 8244 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 18 09:54:51.605852 master-0 kubenswrapper[8244]: E0318 09:54:51.603623 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-client-ca podName:de85e63f-d7e1-4b5c-9e7f-f1b679b59158 nodeName:}" failed. No retries permitted until 2026-03-18 09:54:55.603603463 +0000 UTC m=+12.083339591 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-client-ca") pod "controller-manager-c98c8d65f-kh9fk" (UID: "de85e63f-d7e1-4b5c-9e7f-f1b679b59158") : configmap "client-ca" not found
Mar 18 09:54:51.617880 master-0 kubenswrapper[8244]: I0318 09:54:51.617435 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-serving-cert\") pod \"controller-manager-c98c8d65f-kh9fk\" (UID: \"de85e63f-d7e1-4b5c-9e7f-f1b679b59158\") " pod="openshift-controller-manager/controller-manager-c98c8d65f-kh9fk"
Mar 18 09:54:51.910848 master-0 kubenswrapper[8244]: I0318 09:54:51.910684 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q" event={"ID":"d26036f1-bdce-4ec5-873f-962fa7e8e6c1","Type":"ContainerStarted","Data":"4404e590fec7407faf870aa1aae084da39b8f0b6251730c82fd52357f9b81e01"}
Mar 18 09:54:51.918767 master-0 kubenswrapper[8244]: I0318 09:54:51.918681 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:54:52.298668 master-0 kubenswrapper[8244]: I0318 09:54:52.298624 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:54:52.302233 master-0 kubenswrapper[8244]: I0318 09:54:52.302198 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:54:52.614490 master-0 kubenswrapper[8244]: I0318 09:54:52.614363 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-r8fkv\" (UID: \"d4d2218c-f9df-4d43-8727-ed3a920e23f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv"
Mar 18 09:54:52.614490 master-0 kubenswrapper[8244]: I0318 09:54:52.614424 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6"
Mar 18 09:54:52.614490 master-0 kubenswrapper[8244]: I0318 09:54:52.614455 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr"
Mar 18 09:54:52.614490 master-0 kubenswrapper[8244]: I0318 09:54:52.614481 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert\") pod \"catalog-operator-68f85b4d6c-fhz5s\" (UID: \"ee376320-9ca0-444d-ab37-9cbcb6729b11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s"
Mar 18 09:54:52.615326 master-0 kubenswrapper[8244]: E0318 09:54:52.614509 8244 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 18 09:54:52.615326 master-0 kubenswrapper[8244]: I0318 09:54:52.614538 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert\") pod \"olm-operator-5c9796789-hc74k\" (UID: \"db52ca42-e458-407f-9eeb-bf6de6405edc\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k"
Mar 18 09:54:52.615326 master-0 kubenswrapper[8244]: E0318 09:54:52.614567 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert podName:d4d2218c-f9df-4d43-8727-ed3a920e23f7 nodeName:}" failed. No retries permitted until 2026-03-18 09:55:00.614551062 +0000 UTC m=+17.094287190 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-r8fkv" (UID: "d4d2218c-f9df-4d43-8727-ed3a920e23f7") : secret "package-server-manager-serving-cert" not found
Mar 18 09:54:52.615326 master-0 kubenswrapper[8244]: I0318 09:54:52.614584 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-client-ca\") pod \"route-controller-manager-966f67d76-nzx5f\" (UID: \"c2d6bbbe-b2fb-4df3-b93f-b3b47532719b\") " pod="openshift-route-controller-manager/route-controller-manager-966f67d76-nzx5f"
Mar 18 09:54:52.615326 master-0 kubenswrapper[8244]: I0318 09:54:52.614742 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m"
Mar 18 09:54:52.615326 master-0 kubenswrapper[8244]: I0318 09:54:52.614835 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6"
Mar 18 09:54:52.615326 master-0 kubenswrapper[8244]: I0318 09:54:52.614872 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-8kx9m\" (UID: \"f69a00b6-d908-4485-bb0d-57594fc01d24\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m"
Mar 18 09:54:52.615326 master-0 kubenswrapper[8244]: I0318 09:54:52.614893 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-2glpv\" (UID: \"6f266bad-8b30-4300-ad93-9d48e61f2440\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv"
Mar 18 09:54:52.615326 master-0 kubenswrapper[8244]: I0318 09:54:52.614923 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls\") pod \"dns-operator-9c5679d8f-jrmkr\" (UID: \"8cb5158f-2199-42c0-995a-8490c9ec8a95\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-jrmkr"
Mar 18 09:54:52.615326 master-0 kubenswrapper[8244]: I0318 09:54:52.614941 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs\") pod \"network-metrics-daemon-tbxt4\" (UID: \"0442ec6c-5973-40a5-a0c3-dc02de46d343\") " pod="openshift-multus/network-metrics-daemon-tbxt4"
Mar 18 09:54:52.615326 master-0 kubenswrapper[8244]: E0318 09:54:52.614772 8244 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 18 09:54:52.615326 master-0 kubenswrapper[8244]: E0318 09:54:52.615055 8244 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 18 09:54:52.615326 master-0 kubenswrapper[8244]: E0318 09:54:52.615099 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-client-ca podName:c2d6bbbe-b2fb-4df3-b93f-b3b47532719b nodeName:}" failed. No retries permitted until 2026-03-18 09:55:00.615061454 +0000 UTC m=+17.094797642 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-client-ca") pod "route-controller-manager-966f67d76-nzx5f" (UID: "c2d6bbbe-b2fb-4df3-b93f-b3b47532719b") : configmap "client-ca" not found
Mar 18 09:54:52.615326 master-0 kubenswrapper[8244]: E0318 09:54:52.615124 8244 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 18 09:54:52.615326 master-0 kubenswrapper[8244]: I0318 09:54:52.614985 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-serving-cert\") pod \"route-controller-manager-966f67d76-nzx5f\" (UID: \"c2d6bbbe-b2fb-4df3-b93f-b3b47532719b\") " pod="openshift-route-controller-manager/route-controller-manager-966f67d76-nzx5f"
Mar 18 09:54:52.615326 master-0 kubenswrapper[8244]: E0318 09:54:52.615125 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-serving-cert podName:c2d6bbbe-b2fb-4df3-b93f-b3b47532719b nodeName:}" failed. No retries permitted until 2026-03-18 09:55:00.615116045 +0000 UTC m=+17.094852283 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-serving-cert") pod "route-controller-manager-966f67d76-nzx5f" (UID: "c2d6bbbe-b2fb-4df3-b93f-b3b47532719b") : secret "serving-cert" not found
Mar 18 09:54:52.615326 master-0 kubenswrapper[8244]: E0318 09:54:52.615161 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls podName:f69a00b6-d908-4485-bb0d-57594fc01d24 nodeName:}" failed. No retries permitted until 2026-03-18 09:55:00.615145016 +0000 UTC m=+17.094881144 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-8kx9m" (UID: "f69a00b6-d908-4485-bb0d-57594fc01d24") : secret "cluster-monitoring-operator-tls" not found
Mar 18 09:54:52.615326 master-0 kubenswrapper[8244]: E0318 09:54:52.615194 8244 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 18 09:54:52.615326 master-0 kubenswrapper[8244]: E0318 09:54:52.615212 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics podName:6f266bad-8b30-4300-ad93-9d48e61f2440 nodeName:}" failed. No retries permitted until 2026-03-18 09:55:00.615206428 +0000 UTC m=+17.094942556 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-2glpv" (UID: "6f266bad-8b30-4300-ad93-9d48e61f2440") : secret "marketplace-operator-metrics" not found
Mar 18 09:54:52.615326 master-0 kubenswrapper[8244]: I0318 09:54:52.615186 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz"
Mar 18 09:54:52.615326 master-0 kubenswrapper[8244]: I0318 09:54:52.615240 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-hkzr2\" (UID: \"ca4a0040-a638-46fa-a1cb-a19d83a7ebe4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2"
Mar 18 09:54:52.615326 master-0 kubenswrapper[8244]: E0318 09:54:52.615257 8244 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 18 09:54:52.615326 master-0 kubenswrapper[8244]: E0318 09:54:52.615290 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls podName:accc57fb-75f5-4f89-9804-6ede7f77e27c nodeName:}" failed. No retries permitted until 2026-03-18 09:55:00.615281139 +0000 UTC m=+17.095017347 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls") pod "ingress-operator-66b84d69b-kr5kz" (UID: "accc57fb-75f5-4f89-9804-6ede7f77e27c") : secret "metrics-tls" not found
Mar 18 09:54:52.615326 master-0 kubenswrapper[8244]: E0318 09:54:52.614798 8244 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 18 09:54:52.615326 master-0 kubenswrapper[8244]: E0318 09:54:52.615313 8244 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 18 09:54:52.615326 master-0 kubenswrapper[8244]: E0318 09:54:52.615322 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert podName:ee376320-9ca0-444d-ab37-9cbcb6729b11 nodeName:}" failed. No retries permitted until 2026-03-18 09:55:00.61531421 +0000 UTC m=+17.095050448 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert") pod "catalog-operator-68f85b4d6c-fhz5s" (UID: "ee376320-9ca0-444d-ab37-9cbcb6729b11") : secret "catalog-operator-serving-cert" not found
Mar 18 09:54:52.615326 master-0 kubenswrapper[8244]: E0318 09:54:52.615338 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs podName:ca4a0040-a638-46fa-a1cb-a19d83a7ebe4 nodeName:}" failed. No retries permitted until 2026-03-18 09:55:00.615330381 +0000 UTC m=+17.095066509 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-hkzr2" (UID: "ca4a0040-a638-46fa-a1cb-a19d83a7ebe4") : secret "multus-admission-controller-secret" not found
Mar 18 09:54:52.616255 master-0 kubenswrapper[8244]: E0318 09:54:52.615414 8244 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 18 09:54:52.616255 master-0 kubenswrapper[8244]: E0318 09:54:52.615471 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs podName:0442ec6c-5973-40a5-a0c3-dc02de46d343 nodeName:}" failed. No retries permitted until 2026-03-18 09:55:00.615449904 +0000 UTC m=+17.095186092 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs") pod "network-metrics-daemon-tbxt4" (UID: "0442ec6c-5973-40a5-a0c3-dc02de46d343") : secret "metrics-daemon-secret" not found
Mar 18 09:54:52.616255 master-0 kubenswrapper[8244]: E0318 09:54:52.614816 8244 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 18 09:54:52.616255 master-0 kubenswrapper[8244]: E0318 09:54:52.615508 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert podName:db52ca42-e458-407f-9eeb-bf6de6405edc nodeName:}" failed. No retries permitted until 2026-03-18 09:55:00.615500725 +0000 UTC m=+17.095236953 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert") pod "olm-operator-5c9796789-hc74k" (UID: "db52ca42-e458-407f-9eeb-bf6de6405edc") : secret "olm-operator-serving-cert" not found Mar 18 09:54:52.618136 master-0 kubenswrapper[8244]: I0318 09:54:52.617698 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" Mar 18 09:54:52.618136 master-0 kubenswrapper[8244]: I0318 09:54:52.618045 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" Mar 18 09:54:52.624408 master-0 kubenswrapper[8244]: I0318 09:54:52.624374 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls\") pod \"dns-operator-9c5679d8f-jrmkr\" (UID: \"8cb5158f-2199-42c0-995a-8490c9ec8a95\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-jrmkr" Mar 18 09:54:52.624722 master-0 kubenswrapper[8244]: I0318 09:54:52.624689 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert\") pod \"cluster-version-operator-56d8475767-c2qzr\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") " 
pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr" Mar 18 09:54:52.625021 master-0 kubenswrapper[8244]: I0318 09:54:52.624985 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m" Mar 18 09:54:52.786260 master-0 kubenswrapper[8244]: I0318 09:54:52.786192 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr" Mar 18 09:54:52.791195 master-0 kubenswrapper[8244]: I0318 09:54:52.791167 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-9c5679d8f-jrmkr" Mar 18 09:54:52.792416 master-0 kubenswrapper[8244]: I0318 09:54:52.792215 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" Mar 18 09:54:52.793548 master-0 kubenswrapper[8244]: I0318 09:54:52.793512 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m" Mar 18 09:54:52.838138 master-0 kubenswrapper[8244]: W0318 09:54:52.834652 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15f8941b_dba2_40ba_86d5_3318f5b635cc.slice/crio-fec9f6ce2363bfece5842c76139bc154b3ddbc4bb405022d03bffec1a7a4ae73 WatchSource:0}: Error finding container fec9f6ce2363bfece5842c76139bc154b3ddbc4bb405022d03bffec1a7a4ae73: Status 404 returned error can't find the container with id fec9f6ce2363bfece5842c76139bc154b3ddbc4bb405022d03bffec1a7a4ae73 Mar 18 09:54:52.936747 master-0 kubenswrapper[8244]: I0318 09:54:52.918891 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr" event={"ID":"15f8941b-dba2-40ba-86d5-3318f5b635cc","Type":"ContainerStarted","Data":"fec9f6ce2363bfece5842c76139bc154b3ddbc4bb405022d03bffec1a7a4ae73"} Mar 18 09:54:52.936747 master-0 kubenswrapper[8244]: I0318 09:54:52.930238 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 09:54:55.127922 master-0 kubenswrapper[8244]: I0318 09:54:55.123659 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m"] Mar 18 09:54:55.127922 master-0 kubenswrapper[8244]: I0318 09:54:55.125273 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6"] Mar 18 09:54:55.127922 master-0 kubenswrapper[8244]: I0318 09:54:55.126665 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-9c5679d8f-jrmkr"] Mar 18 09:54:55.149273 master-0 kubenswrapper[8244]: W0318 09:54:55.149211 8244 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2635254_a491_42e5_b598_461c24bf77ca.slice/crio-274e9b834559b126c9207a26c34fb18f9b1812e69065a033951f8808dc379847 WatchSource:0}: Error finding container 274e9b834559b126c9207a26c34fb18f9b1812e69065a033951f8808dc379847: Status 404 returned error can't find the container with id 274e9b834559b126c9207a26c34fb18f9b1812e69065a033951f8808dc379847 Mar 18 09:54:55.154053 master-0 kubenswrapper[8244]: W0318 09:54:55.153995 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ee99294_4785_49d0_b493_0d734cf09396.slice/crio-13ead1a9d130e4cdb9a3e1038d5bbe3813860bfedd951bc71fd7108de36c6c88 WatchSource:0}: Error finding container 13ead1a9d130e4cdb9a3e1038d5bbe3813860bfedd951bc71fd7108de36c6c88: Status 404 returned error can't find the container with id 13ead1a9d130e4cdb9a3e1038d5bbe3813860bfedd951bc71fd7108de36c6c88 Mar 18 09:54:55.580938 master-0 kubenswrapper[8244]: I0318 09:54:55.580560 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:55.581196 master-0 kubenswrapper[8244]: I0318 09:54:55.581109 8244 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 09:54:55.581196 master-0 kubenswrapper[8244]: I0318 09:54:55.581124 8244 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 09:54:55.614382 master-0 kubenswrapper[8244]: I0318 09:54:55.614320 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:54:55.651349 master-0 kubenswrapper[8244]: I0318 09:54:55.651284 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-client-ca\") pod \"controller-manager-c98c8d65f-kh9fk\" (UID: 
\"de85e63f-d7e1-4b5c-9e7f-f1b679b59158\") " pod="openshift-controller-manager/controller-manager-c98c8d65f-kh9fk" Mar 18 09:54:55.651502 master-0 kubenswrapper[8244]: E0318 09:54:55.651401 8244 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 18 09:54:55.651502 master-0 kubenswrapper[8244]: E0318 09:54:55.651469 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-client-ca podName:de85e63f-d7e1-4b5c-9e7f-f1b679b59158 nodeName:}" failed. No retries permitted until 2026-03-18 09:55:03.651452076 +0000 UTC m=+20.131188214 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-client-ca") pod "controller-manager-c98c8d65f-kh9fk" (UID: "de85e63f-d7e1-4b5c-9e7f-f1b679b59158") : configmap "client-ca" not found Mar 18 09:54:56.080147 master-0 kubenswrapper[8244]: I0318 09:54:56.080068 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-9c5679d8f-jrmkr" event={"ID":"8cb5158f-2199-42c0-995a-8490c9ec8a95","Type":"ContainerStarted","Data":"9c1ce07b6c7993e6988dcb73b0d0ae149fc17c7c6fa96dc548353a31db24514c"} Mar 18 09:54:56.082015 master-0 kubenswrapper[8244]: I0318 09:54:56.081973 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m" event={"ID":"8ee99294-4785-49d0-b493-0d734cf09396","Type":"ContainerStarted","Data":"13ead1a9d130e4cdb9a3e1038d5bbe3813860bfedd951bc71fd7108de36c6c88"} Mar 18 09:54:56.083265 master-0 kubenswrapper[8244]: I0318 09:54:56.083241 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" 
event={"ID":"c2635254-a491-42e5-b598-461c24bf77ca","Type":"ContainerStarted","Data":"274e9b834559b126c9207a26c34fb18f9b1812e69065a033951f8808dc379847"} Mar 18 09:54:56.083334 master-0 kubenswrapper[8244]: I0318 09:54:56.083271 8244 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 09:55:00.105040 master-0 kubenswrapper[8244]: I0318 09:55:00.104970 8244 generic.go:334] "Generic (PLEG): container finished" podID="d26036f1-bdce-4ec5-873f-962fa7e8e6c1" containerID="4404e590fec7407faf870aa1aae084da39b8f0b6251730c82fd52357f9b81e01" exitCode=0 Mar 18 09:55:00.105878 master-0 kubenswrapper[8244]: I0318 09:55:00.105029 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q" event={"ID":"d26036f1-bdce-4ec5-873f-962fa7e8e6c1","Type":"ContainerDied","Data":"4404e590fec7407faf870aa1aae084da39b8f0b6251730c82fd52357f9b81e01"} Mar 18 09:55:00.105878 master-0 kubenswrapper[8244]: I0318 09:55:00.105672 8244 scope.go:117] "RemoveContainer" containerID="4404e590fec7407faf870aa1aae084da39b8f0b6251730c82fd52357f9b81e01" Mar 18 09:55:00.622591 master-0 kubenswrapper[8244]: I0318 09:55:00.622044 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-8kx9m\" (UID: \"f69a00b6-d908-4485-bb0d-57594fc01d24\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m" Mar 18 09:55:00.622591 master-0 kubenswrapper[8244]: I0318 09:55:00.622488 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-2glpv\" (UID: \"6f266bad-8b30-4300-ad93-9d48e61f2440\") " 
pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" Mar 18 09:55:00.622591 master-0 kubenswrapper[8244]: E0318 09:55:00.622353 8244 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 09:55:00.623007 master-0 kubenswrapper[8244]: I0318 09:55:00.622550 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs\") pod \"network-metrics-daemon-tbxt4\" (UID: \"0442ec6c-5973-40a5-a0c3-dc02de46d343\") " pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:55:00.623007 master-0 kubenswrapper[8244]: E0318 09:55:00.622696 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls podName:f69a00b6-d908-4485-bb0d-57594fc01d24 nodeName:}" failed. No retries permitted until 2026-03-18 09:55:16.622664331 +0000 UTC m=+33.102400489 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-8kx9m" (UID: "f69a00b6-d908-4485-bb0d-57594fc01d24") : secret "cluster-monitoring-operator-tls" not found Mar 18 09:55:00.623007 master-0 kubenswrapper[8244]: E0318 09:55:00.622780 8244 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 09:55:00.623007 master-0 kubenswrapper[8244]: E0318 09:55:00.622943 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics podName:6f266bad-8b30-4300-ad93-9d48e61f2440 nodeName:}" failed. 
No retries permitted until 2026-03-18 09:55:16.622910697 +0000 UTC m=+33.102646855 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-2glpv" (UID: "6f266bad-8b30-4300-ad93-9d48e61f2440") : secret "marketplace-operator-metrics" not found Mar 18 09:55:00.623007 master-0 kubenswrapper[8244]: E0318 09:55:00.622956 8244 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 09:55:00.623007 master-0 kubenswrapper[8244]: I0318 09:55:00.622817 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-serving-cert\") pod \"route-controller-manager-966f67d76-nzx5f\" (UID: \"c2d6bbbe-b2fb-4df3-b93f-b3b47532719b\") " pod="openshift-route-controller-manager/route-controller-manager-966f67d76-nzx5f" Mar 18 09:55:00.623391 master-0 kubenswrapper[8244]: E0318 09:55:00.622976 8244 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 18 09:55:00.623391 master-0 kubenswrapper[8244]: E0318 09:55:00.623032 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-serving-cert podName:c2d6bbbe-b2fb-4df3-b93f-b3b47532719b nodeName:}" failed. No retries permitted until 2026-03-18 09:55:16.623008889 +0000 UTC m=+33.102745057 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-serving-cert") pod "route-controller-manager-966f67d76-nzx5f" (UID: "c2d6bbbe-b2fb-4df3-b93f-b3b47532719b") : secret "serving-cert" not found Mar 18 09:55:00.623391 master-0 kubenswrapper[8244]: I0318 09:55:00.623134 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" Mar 18 09:55:00.623391 master-0 kubenswrapper[8244]: I0318 09:55:00.623189 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-hkzr2\" (UID: \"ca4a0040-a638-46fa-a1cb-a19d83a7ebe4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2" Mar 18 09:55:00.623391 master-0 kubenswrapper[8244]: E0318 09:55:00.623267 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs podName:0442ec6c-5973-40a5-a0c3-dc02de46d343 nodeName:}" failed. No retries permitted until 2026-03-18 09:55:16.623248685 +0000 UTC m=+33.102984843 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs") pod "network-metrics-daemon-tbxt4" (UID: "0442ec6c-5973-40a5-a0c3-dc02de46d343") : secret "metrics-daemon-secret" not found Mar 18 09:55:00.623391 master-0 kubenswrapper[8244]: E0318 09:55:00.623385 8244 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 09:55:00.623727 master-0 kubenswrapper[8244]: E0318 09:55:00.623441 8244 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 09:55:00.623727 master-0 kubenswrapper[8244]: E0318 09:55:00.623456 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs podName:ca4a0040-a638-46fa-a1cb-a19d83a7ebe4 nodeName:}" failed. No retries permitted until 2026-03-18 09:55:16.62343589 +0000 UTC m=+33.103172058 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-hkzr2" (UID: "ca4a0040-a638-46fa-a1cb-a19d83a7ebe4") : secret "multus-admission-controller-secret" not found Mar 18 09:55:00.623727 master-0 kubenswrapper[8244]: E0318 09:55:00.623496 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert podName:d4d2218c-f9df-4d43-8727-ed3a920e23f7 nodeName:}" failed. No retries permitted until 2026-03-18 09:55:16.623480601 +0000 UTC m=+33.103216759 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-r8fkv" (UID: "d4d2218c-f9df-4d43-8727-ed3a920e23f7") : secret "package-server-manager-serving-cert" not found Mar 18 09:55:00.623727 master-0 kubenswrapper[8244]: I0318 09:55:00.623243 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-r8fkv\" (UID: \"d4d2218c-f9df-4d43-8727-ed3a920e23f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv" Mar 18 09:55:00.623727 master-0 kubenswrapper[8244]: I0318 09:55:00.623562 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert\") pod \"catalog-operator-68f85b4d6c-fhz5s\" (UID: \"ee376320-9ca0-444d-ab37-9cbcb6729b11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s" Mar 18 09:55:00.623727 master-0 kubenswrapper[8244]: I0318 09:55:00.623599 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert\") pod \"olm-operator-5c9796789-hc74k\" (UID: \"db52ca42-e458-407f-9eeb-bf6de6405edc\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k" Mar 18 09:55:00.623727 master-0 kubenswrapper[8244]: I0318 09:55:00.623636 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-client-ca\") pod \"route-controller-manager-966f67d76-nzx5f\" (UID: 
\"c2d6bbbe-b2fb-4df3-b93f-b3b47532719b\") " pod="openshift-route-controller-manager/route-controller-manager-966f67d76-nzx5f" Mar 18 09:55:00.623727 master-0 kubenswrapper[8244]: E0318 09:55:00.623713 8244 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 09:55:00.623727 master-0 kubenswrapper[8244]: E0318 09:55:00.623720 8244 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 09:55:00.624284 master-0 kubenswrapper[8244]: E0318 09:55:00.623763 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert podName:ee376320-9ca0-444d-ab37-9cbcb6729b11 nodeName:}" failed. No retries permitted until 2026-03-18 09:55:16.623748517 +0000 UTC m=+33.103484675 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert") pod "catalog-operator-68f85b4d6c-fhz5s" (UID: "ee376320-9ca0-444d-ab37-9cbcb6729b11") : secret "catalog-operator-serving-cert" not found Mar 18 09:55:00.624284 master-0 kubenswrapper[8244]: E0318 09:55:00.623764 8244 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 18 09:55:00.624284 master-0 kubenswrapper[8244]: E0318 09:55:00.623787 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert podName:db52ca42-e458-407f-9eeb-bf6de6405edc nodeName:}" failed. No retries permitted until 2026-03-18 09:55:16.623775678 +0000 UTC m=+33.103511836 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert") pod "olm-operator-5c9796789-hc74k" (UID: "db52ca42-e458-407f-9eeb-bf6de6405edc") : secret "olm-operator-serving-cert" not found Mar 18 09:55:00.624284 master-0 kubenswrapper[8244]: E0318 09:55:00.623813 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-client-ca podName:c2d6bbbe-b2fb-4df3-b93f-b3b47532719b nodeName:}" failed. No retries permitted until 2026-03-18 09:55:16.623798828 +0000 UTC m=+33.103534996 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-client-ca") pod "route-controller-manager-966f67d76-nzx5f" (UID: "c2d6bbbe-b2fb-4df3-b93f-b3b47532719b") : configmap "client-ca" not found Mar 18 09:55:00.632325 master-0 kubenswrapper[8244]: I0318 09:55:00.632264 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" Mar 18 09:55:00.893585 master-0 kubenswrapper[8244]: I0318 09:55:00.893449 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" Mar 18 09:55:01.559761 master-0 kubenswrapper[8244]: I0318 09:55:01.559310 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 09:55:02.114705 master-0 kubenswrapper[8244]: I0318 09:55:02.114651 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q" event={"ID":"d26036f1-bdce-4ec5-873f-962fa7e8e6c1","Type":"ContainerStarted","Data":"0e2eb9f88477dff52f2e8f12bdb93c5b6461b1901f2eeb98ccf29a08010685ef"} Mar 18 09:55:02.680852 master-0 kubenswrapper[8244]: I0318 09:55:02.670813 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz"] Mar 18 09:55:02.754580 master-0 kubenswrapper[8244]: I0318 09:55:02.752576 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-7649c67db9-6g66t"] Mar 18 09:55:02.754580 master-0 kubenswrapper[8244]: I0318 09:55:02.753329 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:02.760166 master-0 kubenswrapper[8244]: I0318 09:55:02.758734 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 18 09:55:02.760166 master-0 kubenswrapper[8244]: I0318 09:55:02.758969 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 18 09:55:02.760166 master-0 kubenswrapper[8244]: I0318 09:55:02.759114 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-0" Mar 18 09:55:02.760166 master-0 kubenswrapper[8244]: I0318 09:55:02.759227 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 18 09:55:02.760166 master-0 kubenswrapper[8244]: I0318 09:55:02.759345 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 18 09:55:02.760166 master-0 kubenswrapper[8244]: I0318 09:55:02.759476 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 18 09:55:02.760166 master-0 kubenswrapper[8244]: I0318 09:55:02.759572 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 18 09:55:02.760166 master-0 kubenswrapper[8244]: I0318 09:55:02.759661 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-0" Mar 18 09:55:02.767303 master-0 kubenswrapper[8244]: I0318 09:55:02.767116 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 18 09:55:02.775490 master-0 kubenswrapper[8244]: I0318 09:55:02.775389 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 18 09:55:02.795358 master-0 kubenswrapper[8244]: I0318 09:55:02.795121 8244 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-7649c67db9-6g66t"] Mar 18 09:55:02.878342 master-0 kubenswrapper[8244]: I0318 09:55:02.878284 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fca42d9-0447-4762-9898-a05d0e3fe65c-serving-cert\") pod \"apiserver-7649c67db9-6g66t\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:02.878577 master-0 kubenswrapper[8244]: I0318 09:55:02.878376 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-image-import-ca\") pod \"apiserver-7649c67db9-6g66t\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:02.878577 master-0 kubenswrapper[8244]: I0318 09:55:02.878453 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-audit\") pod \"apiserver-7649c67db9-6g66t\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:02.878577 master-0 kubenswrapper[8244]: I0318 09:55:02.878488 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-trusted-ca-bundle\") pod \"apiserver-7649c67db9-6g66t\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:02.878577 master-0 kubenswrapper[8244]: I0318 09:55:02.878527 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/0fca42d9-0447-4762-9898-a05d0e3fe65c-audit-dir\") pod \"apiserver-7649c67db9-6g66t\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:02.878745 master-0 kubenswrapper[8244]: I0318 09:55:02.878589 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ph8s\" (UniqueName: \"kubernetes.io/projected/0fca42d9-0447-4762-9898-a05d0e3fe65c-kube-api-access-7ph8s\") pod \"apiserver-7649c67db9-6g66t\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:02.878745 master-0 kubenswrapper[8244]: I0318 09:55:02.878688 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-etcd-serving-ca\") pod \"apiserver-7649c67db9-6g66t\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:02.878745 master-0 kubenswrapper[8244]: I0318 09:55:02.878717 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0fca42d9-0447-4762-9898-a05d0e3fe65c-node-pullsecrets\") pod \"apiserver-7649c67db9-6g66t\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:02.878889 master-0 kubenswrapper[8244]: I0318 09:55:02.878749 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0fca42d9-0447-4762-9898-a05d0e3fe65c-etcd-client\") pod \"apiserver-7649c67db9-6g66t\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:02.878889 master-0 kubenswrapper[8244]: I0318 
09:55:02.878765 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0fca42d9-0447-4762-9898-a05d0e3fe65c-encryption-config\") pod \"apiserver-7649c67db9-6g66t\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:02.878889 master-0 kubenswrapper[8244]: I0318 09:55:02.878873 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-config\") pod \"apiserver-7649c67db9-6g66t\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:02.980639 master-0 kubenswrapper[8244]: I0318 09:55:02.980217 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ph8s\" (UniqueName: \"kubernetes.io/projected/0fca42d9-0447-4762-9898-a05d0e3fe65c-kube-api-access-7ph8s\") pod \"apiserver-7649c67db9-6g66t\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:02.980639 master-0 kubenswrapper[8244]: I0318 09:55:02.980311 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-etcd-serving-ca\") pod \"apiserver-7649c67db9-6g66t\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:02.980639 master-0 kubenswrapper[8244]: I0318 09:55:02.980332 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0fca42d9-0447-4762-9898-a05d0e3fe65c-node-pullsecrets\") pod \"apiserver-7649c67db9-6g66t\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " 
pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:02.980639 master-0 kubenswrapper[8244]: I0318 09:55:02.980362 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0fca42d9-0447-4762-9898-a05d0e3fe65c-etcd-client\") pod \"apiserver-7649c67db9-6g66t\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:02.980639 master-0 kubenswrapper[8244]: I0318 09:55:02.980378 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0fca42d9-0447-4762-9898-a05d0e3fe65c-encryption-config\") pod \"apiserver-7649c67db9-6g66t\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:02.980639 master-0 kubenswrapper[8244]: I0318 09:55:02.980413 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-config\") pod \"apiserver-7649c67db9-6g66t\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:02.980639 master-0 kubenswrapper[8244]: I0318 09:55:02.980436 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fca42d9-0447-4762-9898-a05d0e3fe65c-serving-cert\") pod \"apiserver-7649c67db9-6g66t\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:02.980639 master-0 kubenswrapper[8244]: I0318 09:55:02.980452 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-image-import-ca\") pod \"apiserver-7649c67db9-6g66t\" (UID: 
\"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:02.980639 master-0 kubenswrapper[8244]: I0318 09:55:02.980477 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-audit\") pod \"apiserver-7649c67db9-6g66t\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:02.980639 master-0 kubenswrapper[8244]: I0318 09:55:02.980494 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-trusted-ca-bundle\") pod \"apiserver-7649c67db9-6g66t\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:02.980639 master-0 kubenswrapper[8244]: I0318 09:55:02.980513 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0fca42d9-0447-4762-9898-a05d0e3fe65c-audit-dir\") pod \"apiserver-7649c67db9-6g66t\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:02.980639 master-0 kubenswrapper[8244]: I0318 09:55:02.980643 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0fca42d9-0447-4762-9898-a05d0e3fe65c-audit-dir\") pod \"apiserver-7649c67db9-6g66t\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:02.983655 master-0 kubenswrapper[8244]: I0318 09:55:02.981473 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0fca42d9-0447-4762-9898-a05d0e3fe65c-node-pullsecrets\") pod \"apiserver-7649c67db9-6g66t\" (UID: 
\"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:02.983655 master-0 kubenswrapper[8244]: I0318 09:55:02.981675 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-config\") pod \"apiserver-7649c67db9-6g66t\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:02.983655 master-0 kubenswrapper[8244]: I0318 09:55:02.982167 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-etcd-serving-ca\") pod \"apiserver-7649c67db9-6g66t\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:02.983655 master-0 kubenswrapper[8244]: E0318 09:55:02.982233 8244 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 18 09:55:02.983655 master-0 kubenswrapper[8244]: E0318 09:55:02.982283 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-audit podName:0fca42d9-0447-4762-9898-a05d0e3fe65c nodeName:}" failed. No retries permitted until 2026-03-18 09:55:03.482264953 +0000 UTC m=+19.962001081 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-audit") pod "apiserver-7649c67db9-6g66t" (UID: "0fca42d9-0447-4762-9898-a05d0e3fe65c") : configmap "audit-0" not found Mar 18 09:55:02.983655 master-0 kubenswrapper[8244]: I0318 09:55:02.982644 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-image-import-ca\") pod \"apiserver-7649c67db9-6g66t\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:02.983655 master-0 kubenswrapper[8244]: I0318 09:55:02.983471 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-trusted-ca-bundle\") pod \"apiserver-7649c67db9-6g66t\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:02.990446 master-0 kubenswrapper[8244]: I0318 09:55:02.990411 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0fca42d9-0447-4762-9898-a05d0e3fe65c-etcd-client\") pod \"apiserver-7649c67db9-6g66t\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:02.991601 master-0 kubenswrapper[8244]: I0318 09:55:02.991535 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0fca42d9-0447-4762-9898-a05d0e3fe65c-encryption-config\") pod \"apiserver-7649c67db9-6g66t\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:03.000729 master-0 kubenswrapper[8244]: I0318 09:55:03.000625 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fca42d9-0447-4762-9898-a05d0e3fe65c-serving-cert\") pod \"apiserver-7649c67db9-6g66t\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:03.486574 master-0 kubenswrapper[8244]: I0318 09:55:03.486525 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-audit\") pod \"apiserver-7649c67db9-6g66t\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:03.486773 master-0 kubenswrapper[8244]: E0318 09:55:03.486743 8244 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 18 09:55:03.486881 master-0 kubenswrapper[8244]: E0318 09:55:03.486850 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-audit podName:0fca42d9-0447-4762-9898-a05d0e3fe65c nodeName:}" failed. No retries permitted until 2026-03-18 09:55:04.486810101 +0000 UTC m=+20.966546299 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-audit") pod "apiserver-7649c67db9-6g66t" (UID: "0fca42d9-0447-4762-9898-a05d0e3fe65c") : configmap "audit-0" not found Mar 18 09:55:03.690241 master-0 kubenswrapper[8244]: I0318 09:55:03.690155 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-client-ca\") pod \"controller-manager-c98c8d65f-kh9fk\" (UID: \"de85e63f-d7e1-4b5c-9e7f-f1b679b59158\") " pod="openshift-controller-manager/controller-manager-c98c8d65f-kh9fk" Mar 18 09:55:03.691118 master-0 kubenswrapper[8244]: E0318 09:55:03.690442 8244 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 18 09:55:03.691118 master-0 kubenswrapper[8244]: E0318 09:55:03.690622 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-client-ca podName:de85e63f-d7e1-4b5c-9e7f-f1b679b59158 nodeName:}" failed. No retries permitted until 2026-03-18 09:55:19.690595143 +0000 UTC m=+36.170331301 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-client-ca") pod "controller-manager-c98c8d65f-kh9fk" (UID: "de85e63f-d7e1-4b5c-9e7f-f1b679b59158") : configmap "client-ca" not found Mar 18 09:55:04.499665 master-0 kubenswrapper[8244]: I0318 09:55:04.499433 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-audit\") pod \"apiserver-7649c67db9-6g66t\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:04.499665 master-0 kubenswrapper[8244]: E0318 09:55:04.499594 8244 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 18 09:55:04.499665 master-0 kubenswrapper[8244]: E0318 09:55:04.499640 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-audit podName:0fca42d9-0447-4762-9898-a05d0e3fe65c nodeName:}" failed. No retries permitted until 2026-03-18 09:55:06.499626134 +0000 UTC m=+22.979362262 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-audit") pod "apiserver-7649c67db9-6g66t" (UID: "0fca42d9-0447-4762-9898-a05d0e3fe65c") : configmap "audit-0" not found Mar 18 09:55:05.220289 master-0 kubenswrapper[8244]: I0318 09:55:05.220247 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 18 09:55:05.221123 master-0 kubenswrapper[8244]: I0318 09:55:05.220717 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 18 09:55:05.233939 master-0 kubenswrapper[8244]: I0318 09:55:05.233901 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Mar 18 09:55:05.242467 master-0 kubenswrapper[8244]: I0318 09:55:05.240466 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ph8s\" (UniqueName: \"kubernetes.io/projected/0fca42d9-0447-4762-9898-a05d0e3fe65c-kube-api-access-7ph8s\") pod \"apiserver-7649c67db9-6g66t\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:05.250291 master-0 kubenswrapper[8244]: I0318 09:55:05.250232 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 18 09:55:05.282009 master-0 kubenswrapper[8244]: I0318 09:55:05.281326 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 18 09:55:05.282009 master-0 kubenswrapper[8244]: I0318 09:55:05.281824 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 09:55:05.307730 master-0 kubenswrapper[8244]: I0318 09:55:05.306279 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 18 09:55:05.316959 master-0 kubenswrapper[8244]: I0318 09:55:05.311402 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be8bd84c-8035-4bec-a725-b0ae89382c0f-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"be8bd84c-8035-4bec-a725-b0ae89382c0f\") " pod="openshift-etcd/installer-1-master-0" Mar 18 09:55:05.316959 master-0 kubenswrapper[8244]: I0318 09:55:05.311583 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/be8bd84c-8035-4bec-a725-b0ae89382c0f-var-lock\") pod \"installer-1-master-0\" (UID: \"be8bd84c-8035-4bec-a725-b0ae89382c0f\") " pod="openshift-etcd/installer-1-master-0" Mar 18 09:55:05.316959 master-0 kubenswrapper[8244]: I0318 09:55:05.311624 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be8bd84c-8035-4bec-a725-b0ae89382c0f-kube-api-access\") pod \"installer-1-master-0\" (UID: \"be8bd84c-8035-4bec-a725-b0ae89382c0f\") " pod="openshift-etcd/installer-1-master-0" Mar 18 09:55:05.316959 master-0 kubenswrapper[8244]: I0318 09:55:05.315671 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 18 09:55:05.414855 master-0 kubenswrapper[8244]: I0318 09:55:05.412324 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be8bd84c-8035-4bec-a725-b0ae89382c0f-kubelet-dir\") pod \"installer-1-master-0\" (UID: 
\"be8bd84c-8035-4bec-a725-b0ae89382c0f\") " pod="openshift-etcd/installer-1-master-0" Mar 18 09:55:05.414855 master-0 kubenswrapper[8244]: I0318 09:55:05.412478 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/be8bd84c-8035-4bec-a725-b0ae89382c0f-var-lock\") pod \"installer-1-master-0\" (UID: \"be8bd84c-8035-4bec-a725-b0ae89382c0f\") " pod="openshift-etcd/installer-1-master-0" Mar 18 09:55:05.414855 master-0 kubenswrapper[8244]: I0318 09:55:05.412523 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2b86644b-ddbd-4b14-b82d-b7d614f7f81e-var-lock\") pod \"installer-1-master-0\" (UID: \"2b86644b-ddbd-4b14-b82d-b7d614f7f81e\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 09:55:05.414855 master-0 kubenswrapper[8244]: I0318 09:55:05.412545 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be8bd84c-8035-4bec-a725-b0ae89382c0f-kube-api-access\") pod \"installer-1-master-0\" (UID: \"be8bd84c-8035-4bec-a725-b0ae89382c0f\") " pod="openshift-etcd/installer-1-master-0" Mar 18 09:55:05.414855 master-0 kubenswrapper[8244]: I0318 09:55:05.412568 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b86644b-ddbd-4b14-b82d-b7d614f7f81e-kube-api-access\") pod \"installer-1-master-0\" (UID: \"2b86644b-ddbd-4b14-b82d-b7d614f7f81e\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 09:55:05.414855 master-0 kubenswrapper[8244]: I0318 09:55:05.412622 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2b86644b-ddbd-4b14-b82d-b7d614f7f81e-kubelet-dir\") pod \"installer-1-master-0\" 
(UID: \"2b86644b-ddbd-4b14-b82d-b7d614f7f81e\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 09:55:05.414855 master-0 kubenswrapper[8244]: I0318 09:55:05.412719 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be8bd84c-8035-4bec-a725-b0ae89382c0f-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"be8bd84c-8035-4bec-a725-b0ae89382c0f\") " pod="openshift-etcd/installer-1-master-0" Mar 18 09:55:05.414855 master-0 kubenswrapper[8244]: I0318 09:55:05.412756 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/be8bd84c-8035-4bec-a725-b0ae89382c0f-var-lock\") pod \"installer-1-master-0\" (UID: \"be8bd84c-8035-4bec-a725-b0ae89382c0f\") " pod="openshift-etcd/installer-1-master-0" Mar 18 09:55:05.450951 master-0 kubenswrapper[8244]: I0318 09:55:05.450907 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be8bd84c-8035-4bec-a725-b0ae89382c0f-kube-api-access\") pod \"installer-1-master-0\" (UID: \"be8bd84c-8035-4bec-a725-b0ae89382c0f\") " pod="openshift-etcd/installer-1-master-0" Mar 18 09:55:05.514373 master-0 kubenswrapper[8244]: I0318 09:55:05.514292 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2b86644b-ddbd-4b14-b82d-b7d614f7f81e-var-lock\") pod \"installer-1-master-0\" (UID: \"2b86644b-ddbd-4b14-b82d-b7d614f7f81e\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 09:55:05.514373 master-0 kubenswrapper[8244]: I0318 09:55:05.514343 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b86644b-ddbd-4b14-b82d-b7d614f7f81e-kube-api-access\") pod \"installer-1-master-0\" (UID: \"2b86644b-ddbd-4b14-b82d-b7d614f7f81e\") " 
pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 09:55:05.514373 master-0 kubenswrapper[8244]: I0318 09:55:05.514378 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2b86644b-ddbd-4b14-b82d-b7d614f7f81e-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"2b86644b-ddbd-4b14-b82d-b7d614f7f81e\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 09:55:05.514623 master-0 kubenswrapper[8244]: I0318 09:55:05.514505 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2b86644b-ddbd-4b14-b82d-b7d614f7f81e-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"2b86644b-ddbd-4b14-b82d-b7d614f7f81e\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 09:55:05.514623 master-0 kubenswrapper[8244]: I0318 09:55:05.514543 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2b86644b-ddbd-4b14-b82d-b7d614f7f81e-var-lock\") pod \"installer-1-master-0\" (UID: \"2b86644b-ddbd-4b14-b82d-b7d614f7f81e\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 09:55:05.526839 master-0 kubenswrapper[8244]: I0318 09:55:05.526197 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c98c8d65f-kh9fk"] Mar 18 09:55:05.527466 master-0 kubenswrapper[8244]: E0318 09:55:05.527430 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-c98c8d65f-kh9fk" podUID="de85e63f-d7e1-4b5c-9e7f-f1b679b59158" Mar 18 09:55:05.543742 master-0 kubenswrapper[8244]: I0318 09:55:05.543217 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-966f67d76-nzx5f"] Mar 18 
09:55:05.543742 master-0 kubenswrapper[8244]: E0318 09:55:05.543568 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-966f67d76-nzx5f" podUID="c2d6bbbe-b2fb-4df3-b93f-b3b47532719b" Mar 18 09:55:05.562973 master-0 kubenswrapper[8244]: I0318 09:55:05.558987 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b86644b-ddbd-4b14-b82d-b7d614f7f81e-kube-api-access\") pod \"installer-1-master-0\" (UID: \"2b86644b-ddbd-4b14-b82d-b7d614f7f81e\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 09:55:05.608529 master-0 kubenswrapper[8244]: I0318 09:55:05.608086 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 18 09:55:05.653384 master-0 kubenswrapper[8244]: I0318 09:55:05.653312 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 09:55:06.047904 master-0 kubenswrapper[8244]: I0318 09:55:06.047852 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-7649c67db9-6g66t"] Mar 18 09:55:06.048206 master-0 kubenswrapper[8244]: E0318 09:55:06.048143 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[audit], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-7649c67db9-6g66t" podUID="0fca42d9-0447-4762-9898-a05d0e3fe65c" Mar 18 09:55:06.144023 master-0 kubenswrapper[8244]: I0318 09:55:06.143973 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" event={"ID":"accc57fb-75f5-4f89-9804-6ede7f77e27c","Type":"ContainerStarted","Data":"1ddd0ca0bee2bbed601ee28c1df5999ea68981b20d1c0067b52437a2649e11aa"} Mar 18 09:55:06.146265 master-0 kubenswrapper[8244]: I0318 09:55:06.146236 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:06.146990 master-0 kubenswrapper[8244]: I0318 09:55:06.146961 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr" event={"ID":"15f8941b-dba2-40ba-86d5-3318f5b635cc","Type":"ContainerStarted","Data":"3240a480121627439aed1343343e4db9fb31cb5c32e8ae0ecc6751df89afe086"} Mar 18 09:55:06.147041 master-0 kubenswrapper[8244]: I0318 09:55:06.147004 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c98c8d65f-kh9fk" Mar 18 09:55:06.147075 master-0 kubenswrapper[8244]: I0318 09:55:06.147061 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-966f67d76-nzx5f" Mar 18 09:55:06.152276 master-0 kubenswrapper[8244]: I0318 09:55:06.152254 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:06.156577 master-0 kubenswrapper[8244]: I0318 09:55:06.156553 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c98c8d65f-kh9fk" Mar 18 09:55:06.164278 master-0 kubenswrapper[8244]: I0318 09:55:06.164250 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-966f67d76-nzx5f" Mar 18 09:55:06.229325 master-0 kubenswrapper[8244]: I0318 09:55:06.227161 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0fca42d9-0447-4762-9898-a05d0e3fe65c-encryption-config\") pod \"0fca42d9-0447-4762-9898-a05d0e3fe65c\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " Mar 18 09:55:06.229325 master-0 kubenswrapper[8244]: I0318 09:55:06.227203 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6gsl\" (UniqueName: \"kubernetes.io/projected/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-kube-api-access-p6gsl\") pod \"de85e63f-d7e1-4b5c-9e7f-f1b679b59158\" (UID: \"de85e63f-d7e1-4b5c-9e7f-f1b679b59158\") " Mar 18 09:55:06.229325 master-0 kubenswrapper[8244]: I0318 09:55:06.227228 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fca42d9-0447-4762-9898-a05d0e3fe65c-serving-cert\") pod \"0fca42d9-0447-4762-9898-a05d0e3fe65c\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " Mar 18 09:55:06.229325 master-0 kubenswrapper[8244]: I0318 09:55:06.227252 8244 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-config\") pod \"c2d6bbbe-b2fb-4df3-b93f-b3b47532719b\" (UID: \"c2d6bbbe-b2fb-4df3-b93f-b3b47532719b\") " Mar 18 09:55:06.229325 master-0 kubenswrapper[8244]: I0318 09:55:06.227274 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fn45\" (UniqueName: \"kubernetes.io/projected/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-kube-api-access-4fn45\") pod \"c2d6bbbe-b2fb-4df3-b93f-b3b47532719b\" (UID: \"c2d6bbbe-b2fb-4df3-b93f-b3b47532719b\") " Mar 18 09:55:06.229325 master-0 kubenswrapper[8244]: I0318 09:55:06.227301 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ph8s\" (UniqueName: \"kubernetes.io/projected/0fca42d9-0447-4762-9898-a05d0e3fe65c-kube-api-access-7ph8s\") pod \"0fca42d9-0447-4762-9898-a05d0e3fe65c\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " Mar 18 09:55:06.229325 master-0 kubenswrapper[8244]: I0318 09:55:06.227356 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0fca42d9-0447-4762-9898-a05d0e3fe65c-etcd-client\") pod \"0fca42d9-0447-4762-9898-a05d0e3fe65c\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " Mar 18 09:55:06.229325 master-0 kubenswrapper[8244]: I0318 09:55:06.227378 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0fca42d9-0447-4762-9898-a05d0e3fe65c-node-pullsecrets\") pod \"0fca42d9-0447-4762-9898-a05d0e3fe65c\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " Mar 18 09:55:06.229325 master-0 kubenswrapper[8244]: I0318 09:55:06.227416 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fca42d9-0447-4762-9898-a05d0e3fe65c-node-pullsecrets" (OuterVolumeSpecName: 
"node-pullsecrets") pod "0fca42d9-0447-4762-9898-a05d0e3fe65c" (UID: "0fca42d9-0447-4762-9898-a05d0e3fe65c"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:55:06.229325 master-0 kubenswrapper[8244]: I0318 09:55:06.227816 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-config" (OuterVolumeSpecName: "config") pod "c2d6bbbe-b2fb-4df3-b93f-b3b47532719b" (UID: "c2d6bbbe-b2fb-4df3-b93f-b3b47532719b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:55:06.229325 master-0 kubenswrapper[8244]: I0318 09:55:06.227923 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-config\") pod \"0fca42d9-0447-4762-9898-a05d0e3fe65c\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " Mar 18 09:55:06.229325 master-0 kubenswrapper[8244]: I0318 09:55:06.227952 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-proxy-ca-bundles\") pod \"de85e63f-d7e1-4b5c-9e7f-f1b679b59158\" (UID: \"de85e63f-d7e1-4b5c-9e7f-f1b679b59158\") " Mar 18 09:55:06.229325 master-0 kubenswrapper[8244]: I0318 09:55:06.227991 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-serving-cert\") pod \"de85e63f-d7e1-4b5c-9e7f-f1b679b59158\" (UID: \"de85e63f-d7e1-4b5c-9e7f-f1b679b59158\") " Mar 18 09:55:06.229325 master-0 kubenswrapper[8244]: I0318 09:55:06.228007 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-etcd-serving-ca\") pod 
\"0fca42d9-0447-4762-9898-a05d0e3fe65c\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " Mar 18 09:55:06.229325 master-0 kubenswrapper[8244]: I0318 09:55:06.228025 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-image-import-ca\") pod \"0fca42d9-0447-4762-9898-a05d0e3fe65c\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " Mar 18 09:55:06.229325 master-0 kubenswrapper[8244]: I0318 09:55:06.228055 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-trusted-ca-bundle\") pod \"0fca42d9-0447-4762-9898-a05d0e3fe65c\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " Mar 18 09:55:06.229325 master-0 kubenswrapper[8244]: I0318 09:55:06.228074 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0fca42d9-0447-4762-9898-a05d0e3fe65c-audit-dir\") pod \"0fca42d9-0447-4762-9898-a05d0e3fe65c\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " Mar 18 09:55:06.229325 master-0 kubenswrapper[8244]: I0318 09:55:06.228100 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-config\") pod \"de85e63f-d7e1-4b5c-9e7f-f1b679b59158\" (UID: \"de85e63f-d7e1-4b5c-9e7f-f1b679b59158\") " Mar 18 09:55:06.229325 master-0 kubenswrapper[8244]: I0318 09:55:06.228420 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "de85e63f-d7e1-4b5c-9e7f-f1b679b59158" (UID: "de85e63f-d7e1-4b5c-9e7f-f1b679b59158"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:55:06.229325 master-0 kubenswrapper[8244]: I0318 09:55:06.228624 8244 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 18 09:55:06.229325 master-0 kubenswrapper[8244]: I0318 09:55:06.228644 8244 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-config\") on node \"master-0\" DevicePath \"\"" Mar 18 09:55:06.229325 master-0 kubenswrapper[8244]: I0318 09:55:06.228658 8244 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0fca42d9-0447-4762-9898-a05d0e3fe65c-node-pullsecrets\") on node \"master-0\" DevicePath \"\"" Mar 18 09:55:06.229325 master-0 kubenswrapper[8244]: I0318 09:55:06.229110 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-config" (OuterVolumeSpecName: "config") pod "0fca42d9-0447-4762-9898-a05d0e3fe65c" (UID: "0fca42d9-0447-4762-9898-a05d0e3fe65c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:55:06.230889 master-0 kubenswrapper[8244]: I0318 09:55:06.229877 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fca42d9-0447-4762-9898-a05d0e3fe65c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "0fca42d9-0447-4762-9898-a05d0e3fe65c" (UID: "0fca42d9-0447-4762-9898-a05d0e3fe65c"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:55:06.230889 master-0 kubenswrapper[8244]: I0318 09:55:06.230503 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "0fca42d9-0447-4762-9898-a05d0e3fe65c" (UID: "0fca42d9-0447-4762-9898-a05d0e3fe65c"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:55:06.230994 master-0 kubenswrapper[8244]: I0318 09:55:06.230910 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-kube-api-access-p6gsl" (OuterVolumeSpecName: "kube-api-access-p6gsl") pod "de85e63f-d7e1-4b5c-9e7f-f1b679b59158" (UID: "de85e63f-d7e1-4b5c-9e7f-f1b679b59158"). InnerVolumeSpecName "kube-api-access-p6gsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:55:06.230994 master-0 kubenswrapper[8244]: I0318 09:55:06.230918 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fca42d9-0447-4762-9898-a05d0e3fe65c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0fca42d9-0447-4762-9898-a05d0e3fe65c" (UID: "0fca42d9-0447-4762-9898-a05d0e3fe65c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:55:06.230994 master-0 kubenswrapper[8244]: I0318 09:55:06.230978 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "0fca42d9-0447-4762-9898-a05d0e3fe65c" (UID: "0fca42d9-0447-4762-9898-a05d0e3fe65c"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:55:06.231123 master-0 kubenswrapper[8244]: I0318 09:55:06.231066 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fca42d9-0447-4762-9898-a05d0e3fe65c-kube-api-access-7ph8s" (OuterVolumeSpecName: "kube-api-access-7ph8s") pod "0fca42d9-0447-4762-9898-a05d0e3fe65c" (UID: "0fca42d9-0447-4762-9898-a05d0e3fe65c"). InnerVolumeSpecName "kube-api-access-7ph8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:55:06.231313 master-0 kubenswrapper[8244]: I0318 09:55:06.231275 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-kube-api-access-4fn45" (OuterVolumeSpecName: "kube-api-access-4fn45") pod "c2d6bbbe-b2fb-4df3-b93f-b3b47532719b" (UID: "c2d6bbbe-b2fb-4df3-b93f-b3b47532719b"). InnerVolumeSpecName "kube-api-access-4fn45". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:55:06.231389 master-0 kubenswrapper[8244]: I0318 09:55:06.231367 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-config" (OuterVolumeSpecName: "config") pod "de85e63f-d7e1-4b5c-9e7f-f1b679b59158" (UID: "de85e63f-d7e1-4b5c-9e7f-f1b679b59158"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:55:06.231479 master-0 kubenswrapper[8244]: I0318 09:55:06.231427 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "0fca42d9-0447-4762-9898-a05d0e3fe65c" (UID: "0fca42d9-0447-4762-9898-a05d0e3fe65c"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:55:06.231587 master-0 kubenswrapper[8244]: I0318 09:55:06.231540 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fca42d9-0447-4762-9898-a05d0e3fe65c-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "0fca42d9-0447-4762-9898-a05d0e3fe65c" (UID: "0fca42d9-0447-4762-9898-a05d0e3fe65c"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:55:06.232487 master-0 kubenswrapper[8244]: I0318 09:55:06.232454 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fca42d9-0447-4762-9898-a05d0e3fe65c-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "0fca42d9-0447-4762-9898-a05d0e3fe65c" (UID: "0fca42d9-0447-4762-9898-a05d0e3fe65c"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:55:06.233107 master-0 kubenswrapper[8244]: I0318 09:55:06.233076 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "de85e63f-d7e1-4b5c-9e7f-f1b679b59158" (UID: "de85e63f-d7e1-4b5c-9e7f-f1b679b59158"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:55:06.329917 master-0 kubenswrapper[8244]: I0318 09:55:06.329787 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fn45\" (UniqueName: \"kubernetes.io/projected/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-kube-api-access-4fn45\") on node \"master-0\" DevicePath \"\"" Mar 18 09:55:06.329917 master-0 kubenswrapper[8244]: I0318 09:55:06.329836 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ph8s\" (UniqueName: \"kubernetes.io/projected/0fca42d9-0447-4762-9898-a05d0e3fe65c-kube-api-access-7ph8s\") on node \"master-0\" DevicePath \"\"" Mar 18 09:55:06.329917 master-0 kubenswrapper[8244]: I0318 09:55:06.329846 8244 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0fca42d9-0447-4762-9898-a05d0e3fe65c-etcd-client\") on node \"master-0\" DevicePath \"\"" Mar 18 09:55:06.329917 master-0 kubenswrapper[8244]: I0318 09:55:06.329855 8244 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-config\") on node \"master-0\" DevicePath \"\"" Mar 18 09:55:06.329917 master-0 kubenswrapper[8244]: I0318 09:55:06.329882 8244 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 09:55:06.329917 master-0 kubenswrapper[8244]: I0318 09:55:06.329891 8244 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-etcd-serving-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 09:55:06.329917 master-0 kubenswrapper[8244]: I0318 09:55:06.329899 8244 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-image-import-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 09:55:06.329917 master-0 kubenswrapper[8244]: I0318 09:55:06.329907 8244 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 09:55:06.329917 master-0 kubenswrapper[8244]: I0318 09:55:06.329915 8244 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0fca42d9-0447-4762-9898-a05d0e3fe65c-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:55:06.329917 master-0 kubenswrapper[8244]: I0318 09:55:06.329925 8244 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-config\") on node \"master-0\" DevicePath \"\"" Mar 18 09:55:06.329917 master-0 kubenswrapper[8244]: I0318 09:55:06.329933 8244 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0fca42d9-0447-4762-9898-a05d0e3fe65c-encryption-config\") on node \"master-0\" DevicePath \"\"" Mar 18 09:55:06.329917 master-0 kubenswrapper[8244]: I0318 09:55:06.329942 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6gsl\" (UniqueName: \"kubernetes.io/projected/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-kube-api-access-p6gsl\") on node \"master-0\" DevicePath \"\"" Mar 18 09:55:06.330389 master-0 kubenswrapper[8244]: I0318 09:55:06.329951 8244 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fca42d9-0447-4762-9898-a05d0e3fe65c-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 09:55:06.509982 master-0 kubenswrapper[8244]: I0318 09:55:06.509187 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 18 
09:55:06.529730 master-0 kubenswrapper[8244]: I0318 09:55:06.529631 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 18 09:55:06.531807 master-0 kubenswrapper[8244]: I0318 09:55:06.531763 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-audit\") pod \"apiserver-7649c67db9-6g66t\" (UID: \"0fca42d9-0447-4762-9898-a05d0e3fe65c\") " pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:06.531984 master-0 kubenswrapper[8244]: E0318 09:55:06.531963 8244 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 18 09:55:06.532035 master-0 kubenswrapper[8244]: E0318 09:55:06.532024 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-audit podName:0fca42d9-0447-4762-9898-a05d0e3fe65c nodeName:}" failed. No retries permitted until 2026-03-18 09:55:10.532006344 +0000 UTC m=+27.011742492 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-audit") pod "apiserver-7649c67db9-6g66t" (UID: "0fca42d9-0447-4762-9898-a05d0e3fe65c") : configmap "audit-0" not found Mar 18 09:55:06.541969 master-0 kubenswrapper[8244]: W0318 09:55:06.541798 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod2b86644b_ddbd_4b14_b82d_b7d614f7f81e.slice/crio-ff35f1dafa8906a2135f2102b22f8fe7a33132cca04a5b8496f6ffb0a27e700f WatchSource:0}: Error finding container ff35f1dafa8906a2135f2102b22f8fe7a33132cca04a5b8496f6ffb0a27e700f: Status 404 returned error can't find the container with id ff35f1dafa8906a2135f2102b22f8fe7a33132cca04a5b8496f6ffb0a27e700f Mar 18 09:55:07.191196 master-0 kubenswrapper[8244]: I0318 09:55:07.191067 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"2b86644b-ddbd-4b14-b82d-b7d614f7f81e","Type":"ContainerStarted","Data":"826610ccc7ba64519b97c82e3e527d6dc4e2a131529f71a75f5c480a046f7aa6"} Mar 18 09:55:07.191196 master-0 kubenswrapper[8244]: I0318 09:55:07.191138 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"2b86644b-ddbd-4b14-b82d-b7d614f7f81e","Type":"ContainerStarted","Data":"ff35f1dafa8906a2135f2102b22f8fe7a33132cca04a5b8496f6ffb0a27e700f"} Mar 18 09:55:07.205844 master-0 kubenswrapper[8244]: I0318 09:55:07.205505 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"be8bd84c-8035-4bec-a725-b0ae89382c0f","Type":"ContainerStarted","Data":"acbbc72042bd93d1606b83c55c35f1b48dc5dce61f6ad5d66183b045a74dff9a"} Mar 18 09:55:07.205844 master-0 kubenswrapper[8244]: I0318 09:55:07.205560 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" 
event={"ID":"be8bd84c-8035-4bec-a725-b0ae89382c0f","Type":"ContainerStarted","Data":"cf4889e117bb83c7e1a1800e9a36e897d1db0934994a8b13923df3be14b35ebb"} Mar 18 09:55:07.208863 master-0 kubenswrapper[8244]: I0318 09:55:07.208838 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-966f67d76-nzx5f" Mar 18 09:55:07.220224 master-0 kubenswrapper[8244]: I0318 09:55:07.211936 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7649c67db9-6g66t" Mar 18 09:55:07.220224 master-0 kubenswrapper[8244]: I0318 09:55:07.212145 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-9c5679d8f-jrmkr" event={"ID":"8cb5158f-2199-42c0-995a-8490c9ec8a95","Type":"ContainerStarted","Data":"a032d2b0d314d678fd1509008a81ad2b334c00b15d0b9fdd6a218c9f6ebdd4ba"} Mar 18 09:55:07.220224 master-0 kubenswrapper[8244]: I0318 09:55:07.212750 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-9c5679d8f-jrmkr" event={"ID":"8cb5158f-2199-42c0-995a-8490c9ec8a95","Type":"ContainerStarted","Data":"f12bf137dcc1b8b4bc4b768e94250a014f875c5b9d3f913108b98140f799dca4"} Mar 18 09:55:07.220224 master-0 kubenswrapper[8244]: I0318 09:55:07.212901 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c98c8d65f-kh9fk" Mar 18 09:55:07.220224 master-0 kubenswrapper[8244]: I0318 09:55:07.213795 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-1-master-0" podStartSLOduration=2.213778635 podStartE2EDuration="2.213778635s" podCreationTimestamp="2026-03-18 09:55:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:55:07.211574702 +0000 UTC m=+23.691310820" watchObservedRunningTime="2026-03-18 09:55:07.213778635 +0000 UTC m=+23.693514753" Mar 18 09:55:07.256870 master-0 kubenswrapper[8244]: I0318 09:55:07.253461 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-1-master-0" podStartSLOduration=2.253438269 podStartE2EDuration="2.253438269s" podCreationTimestamp="2026-03-18 09:55:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:55:07.252155408 +0000 UTC m=+23.731891556" watchObservedRunningTime="2026-03-18 09:55:07.253438269 +0000 UTC m=+23.733174407" Mar 18 09:55:07.317126 master-0 kubenswrapper[8244]: I0318 09:55:07.317074 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7667954bb7-xws4c"] Mar 18 09:55:07.322483 master-0 kubenswrapper[8244]: I0318 09:55:07.321836 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c98c8d65f-kh9fk"] Mar 18 09:55:07.322483 master-0 kubenswrapper[8244]: I0318 09:55:07.321874 8244 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-c98c8d65f-kh9fk"] Mar 18 09:55:07.322483 master-0 kubenswrapper[8244]: I0318 09:55:07.321965 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7667954bb7-xws4c" Mar 18 09:55:07.324987 master-0 kubenswrapper[8244]: I0318 09:55:07.324949 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7667954bb7-xws4c"] Mar 18 09:55:07.327481 master-0 kubenswrapper[8244]: I0318 09:55:07.327455 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 09:55:07.328704 master-0 kubenswrapper[8244]: I0318 09:55:07.328671 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 18 09:55:07.328778 master-0 kubenswrapper[8244]: I0318 09:55:07.328763 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 18 09:55:07.329098 master-0 kubenswrapper[8244]: I0318 09:55:07.329002 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 09:55:07.329167 master-0 kubenswrapper[8244]: I0318 09:55:07.329112 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 09:55:07.335372 master-0 kubenswrapper[8244]: I0318 09:55:07.335333 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 09:55:07.347926 master-0 kubenswrapper[8244]: I0318 09:55:07.347882 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-966f67d76-nzx5f"] Mar 18 09:55:07.353799 master-0 kubenswrapper[8244]: I0318 09:55:07.353737 8244 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-966f67d76-nzx5f"] Mar 18 09:55:07.380774 master-0 kubenswrapper[8244]: I0318 09:55:07.380736 8244 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-apiserver/apiserver-7649c67db9-6g66t"] Mar 18 09:55:07.384349 master-0 kubenswrapper[8244]: I0318 09:55:07.384303 8244 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-7649c67db9-6g66t"] Mar 18 09:55:07.397311 master-0 kubenswrapper[8244]: I0318 09:55:07.397117 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-z9sf5"] Mar 18 09:55:07.397731 master-0 kubenswrapper[8244]: I0318 09:55:07.397671 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-z9sf5" Mar 18 09:55:07.399768 master-0 kubenswrapper[8244]: I0318 09:55:07.399537 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 18 09:55:07.399947 master-0 kubenswrapper[8244]: I0318 09:55:07.399868 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 18 09:55:07.400025 master-0 kubenswrapper[8244]: I0318 09:55:07.399997 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 18 09:55:07.400275 master-0 kubenswrapper[8244]: I0318 09:55:07.400176 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 18 09:55:07.412191 master-0 kubenswrapper[8244]: I0318 09:55:07.412127 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-z9sf5"] Mar 18 09:55:07.446631 master-0 kubenswrapper[8244]: I0318 09:55:07.446508 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84630bf5-3d03-48ec-9b0c-34034f6181d4-client-ca\") pod \"controller-manager-7667954bb7-xws4c\" (UID: \"84630bf5-3d03-48ec-9b0c-34034f6181d4\") " pod="openshift-controller-manager/controller-manager-7667954bb7-xws4c" Mar 18 09:55:07.446773 master-0 kubenswrapper[8244]: 
I0318 09:55:07.446723 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/da04c6fa-4916-4bed-a6b2-cc92bf2ee379-metrics-tls\") pod \"dns-default-z9sf5\" (UID: \"da04c6fa-4916-4bed-a6b2-cc92bf2ee379\") " pod="openshift-dns/dns-default-z9sf5" Mar 18 09:55:07.446889 master-0 kubenswrapper[8244]: I0318 09:55:07.446814 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq4rm\" (UniqueName: \"kubernetes.io/projected/da04c6fa-4916-4bed-a6b2-cc92bf2ee379-kube-api-access-vq4rm\") pod \"dns-default-z9sf5\" (UID: \"da04c6fa-4916-4bed-a6b2-cc92bf2ee379\") " pod="openshift-dns/dns-default-z9sf5" Mar 18 09:55:07.446936 master-0 kubenswrapper[8244]: I0318 09:55:07.446898 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84630bf5-3d03-48ec-9b0c-34034f6181d4-serving-cert\") pod \"controller-manager-7667954bb7-xws4c\" (UID: \"84630bf5-3d03-48ec-9b0c-34034f6181d4\") " pod="openshift-controller-manager/controller-manager-7667954bb7-xws4c" Mar 18 09:55:07.446986 master-0 kubenswrapper[8244]: I0318 09:55:07.446949 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84630bf5-3d03-48ec-9b0c-34034f6181d4-proxy-ca-bundles\") pod \"controller-manager-7667954bb7-xws4c\" (UID: \"84630bf5-3d03-48ec-9b0c-34034f6181d4\") " pod="openshift-controller-manager/controller-manager-7667954bb7-xws4c" Mar 18 09:55:07.447077 master-0 kubenswrapper[8244]: I0318 09:55:07.447051 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84630bf5-3d03-48ec-9b0c-34034f6181d4-config\") pod \"controller-manager-7667954bb7-xws4c\" (UID: 
\"84630bf5-3d03-48ec-9b0c-34034f6181d4\") " pod="openshift-controller-manager/controller-manager-7667954bb7-xws4c" Mar 18 09:55:07.447132 master-0 kubenswrapper[8244]: I0318 09:55:07.447081 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da04c6fa-4916-4bed-a6b2-cc92bf2ee379-config-volume\") pod \"dns-default-z9sf5\" (UID: \"da04c6fa-4916-4bed-a6b2-cc92bf2ee379\") " pod="openshift-dns/dns-default-z9sf5" Mar 18 09:55:07.447132 master-0 kubenswrapper[8244]: I0318 09:55:07.447107 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stt4j\" (UniqueName: \"kubernetes.io/projected/84630bf5-3d03-48ec-9b0c-34034f6181d4-kube-api-access-stt4j\") pod \"controller-manager-7667954bb7-xws4c\" (UID: \"84630bf5-3d03-48ec-9b0c-34034f6181d4\") " pod="openshift-controller-manager/controller-manager-7667954bb7-xws4c" Mar 18 09:55:07.447195 master-0 kubenswrapper[8244]: I0318 09:55:07.447166 8244 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0fca42d9-0447-4762-9898-a05d0e3fe65c-audit\") on node \"master-0\" DevicePath \"\"" Mar 18 09:55:07.447195 master-0 kubenswrapper[8244]: I0318 09:55:07.447179 8244 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 09:55:07.447252 master-0 kubenswrapper[8244]: I0318 09:55:07.447208 8244 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de85e63f-d7e1-4b5c-9e7f-f1b679b59158-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 09:55:07.447252 master-0 kubenswrapper[8244]: I0318 09:55:07.447247 8244 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 09:55:07.548312 master-0 kubenswrapper[8244]: I0318 09:55:07.548254 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/da04c6fa-4916-4bed-a6b2-cc92bf2ee379-metrics-tls\") pod \"dns-default-z9sf5\" (UID: \"da04c6fa-4916-4bed-a6b2-cc92bf2ee379\") " pod="openshift-dns/dns-default-z9sf5" Mar 18 09:55:07.548499 master-0 kubenswrapper[8244]: E0318 09:55:07.548403 8244 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Mar 18 09:55:07.548687 master-0 kubenswrapper[8244]: I0318 09:55:07.548632 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq4rm\" (UniqueName: \"kubernetes.io/projected/da04c6fa-4916-4bed-a6b2-cc92bf2ee379-kube-api-access-vq4rm\") pod \"dns-default-z9sf5\" (UID: \"da04c6fa-4916-4bed-a6b2-cc92bf2ee379\") " pod="openshift-dns/dns-default-z9sf5" Mar 18 09:55:07.548687 master-0 kubenswrapper[8244]: E0318 09:55:07.548685 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da04c6fa-4916-4bed-a6b2-cc92bf2ee379-metrics-tls podName:da04c6fa-4916-4bed-a6b2-cc92bf2ee379 nodeName:}" failed. No retries permitted until 2026-03-18 09:55:08.048665631 +0000 UTC m=+24.528401759 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/da04c6fa-4916-4bed-a6b2-cc92bf2ee379-metrics-tls") pod "dns-default-z9sf5" (UID: "da04c6fa-4916-4bed-a6b2-cc92bf2ee379") : secret "dns-default-metrics-tls" not found Mar 18 09:55:07.548808 master-0 kubenswrapper[8244]: I0318 09:55:07.548726 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84630bf5-3d03-48ec-9b0c-34034f6181d4-serving-cert\") pod \"controller-manager-7667954bb7-xws4c\" (UID: \"84630bf5-3d03-48ec-9b0c-34034f6181d4\") " pod="openshift-controller-manager/controller-manager-7667954bb7-xws4c" Mar 18 09:55:07.548886 master-0 kubenswrapper[8244]: I0318 09:55:07.548868 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84630bf5-3d03-48ec-9b0c-34034f6181d4-proxy-ca-bundles\") pod \"controller-manager-7667954bb7-xws4c\" (UID: \"84630bf5-3d03-48ec-9b0c-34034f6181d4\") " pod="openshift-controller-manager/controller-manager-7667954bb7-xws4c" Mar 18 09:55:07.548979 master-0 kubenswrapper[8244]: I0318 09:55:07.548958 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84630bf5-3d03-48ec-9b0c-34034f6181d4-config\") pod \"controller-manager-7667954bb7-xws4c\" (UID: \"84630bf5-3d03-48ec-9b0c-34034f6181d4\") " pod="openshift-controller-manager/controller-manager-7667954bb7-xws4c" Mar 18 09:55:07.549050 master-0 kubenswrapper[8244]: I0318 09:55:07.549033 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da04c6fa-4916-4bed-a6b2-cc92bf2ee379-config-volume\") pod \"dns-default-z9sf5\" (UID: \"da04c6fa-4916-4bed-a6b2-cc92bf2ee379\") " pod="openshift-dns/dns-default-z9sf5" Mar 18 09:55:07.549139 master-0 kubenswrapper[8244]: I0318 09:55:07.549118 
8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stt4j\" (UniqueName: \"kubernetes.io/projected/84630bf5-3d03-48ec-9b0c-34034f6181d4-kube-api-access-stt4j\") pod \"controller-manager-7667954bb7-xws4c\" (UID: \"84630bf5-3d03-48ec-9b0c-34034f6181d4\") " pod="openshift-controller-manager/controller-manager-7667954bb7-xws4c" Mar 18 09:55:07.549201 master-0 kubenswrapper[8244]: I0318 09:55:07.549180 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84630bf5-3d03-48ec-9b0c-34034f6181d4-client-ca\") pod \"controller-manager-7667954bb7-xws4c\" (UID: \"84630bf5-3d03-48ec-9b0c-34034f6181d4\") " pod="openshift-controller-manager/controller-manager-7667954bb7-xws4c" Mar 18 09:55:07.551067 master-0 kubenswrapper[8244]: I0318 09:55:07.551026 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84630bf5-3d03-48ec-9b0c-34034f6181d4-config\") pod \"controller-manager-7667954bb7-xws4c\" (UID: \"84630bf5-3d03-48ec-9b0c-34034f6181d4\") " pod="openshift-controller-manager/controller-manager-7667954bb7-xws4c" Mar 18 09:55:07.551137 master-0 kubenswrapper[8244]: I0318 09:55:07.551070 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84630bf5-3d03-48ec-9b0c-34034f6181d4-client-ca\") pod \"controller-manager-7667954bb7-xws4c\" (UID: \"84630bf5-3d03-48ec-9b0c-34034f6181d4\") " pod="openshift-controller-manager/controller-manager-7667954bb7-xws4c" Mar 18 09:55:07.552928 master-0 kubenswrapper[8244]: I0318 09:55:07.552328 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84630bf5-3d03-48ec-9b0c-34034f6181d4-proxy-ca-bundles\") pod \"controller-manager-7667954bb7-xws4c\" (UID: \"84630bf5-3d03-48ec-9b0c-34034f6181d4\") " 
pod="openshift-controller-manager/controller-manager-7667954bb7-xws4c" Mar 18 09:55:07.553727 master-0 kubenswrapper[8244]: I0318 09:55:07.553697 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da04c6fa-4916-4bed-a6b2-cc92bf2ee379-config-volume\") pod \"dns-default-z9sf5\" (UID: \"da04c6fa-4916-4bed-a6b2-cc92bf2ee379\") " pod="openshift-dns/dns-default-z9sf5" Mar 18 09:55:07.578073 master-0 kubenswrapper[8244]: I0318 09:55:07.577992 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84630bf5-3d03-48ec-9b0c-34034f6181d4-serving-cert\") pod \"controller-manager-7667954bb7-xws4c\" (UID: \"84630bf5-3d03-48ec-9b0c-34034f6181d4\") " pod="openshift-controller-manager/controller-manager-7667954bb7-xws4c" Mar 18 09:55:07.579044 master-0 kubenswrapper[8244]: I0318 09:55:07.578972 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq4rm\" (UniqueName: \"kubernetes.io/projected/da04c6fa-4916-4bed-a6b2-cc92bf2ee379-kube-api-access-vq4rm\") pod \"dns-default-z9sf5\" (UID: \"da04c6fa-4916-4bed-a6b2-cc92bf2ee379\") " pod="openshift-dns/dns-default-z9sf5" Mar 18 09:55:07.587735 master-0 kubenswrapper[8244]: I0318 09:55:07.586843 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln"] Mar 18 09:55:07.588020 master-0 kubenswrapper[8244]: I0318 09:55:07.587993 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 09:55:07.595610 master-0 kubenswrapper[8244]: I0318 09:55:07.595039 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 18 09:55:07.596808 master-0 kubenswrapper[8244]: I0318 09:55:07.596483 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stt4j\" (UniqueName: \"kubernetes.io/projected/84630bf5-3d03-48ec-9b0c-34034f6181d4-kube-api-access-stt4j\") pod \"controller-manager-7667954bb7-xws4c\" (UID: \"84630bf5-3d03-48ec-9b0c-34034f6181d4\") " pod="openshift-controller-manager/controller-manager-7667954bb7-xws4c" Mar 18 09:55:07.605671 master-0 kubenswrapper[8244]: I0318 09:55:07.605589 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 18 09:55:07.605839 master-0 kubenswrapper[8244]: I0318 09:55:07.605684 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 18 09:55:07.606713 master-0 kubenswrapper[8244]: I0318 09:55:07.606255 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 18 09:55:07.606713 master-0 kubenswrapper[8244]: I0318 09:55:07.606385 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 18 09:55:07.606713 master-0 kubenswrapper[8244]: I0318 09:55:07.606503 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 18 09:55:07.606713 master-0 kubenswrapper[8244]: I0318 09:55:07.606596 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 18 09:55:07.606713 master-0 kubenswrapper[8244]: I0318 09:55:07.606467 8244 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 18 09:55:07.627572 master-0 kubenswrapper[8244]: I0318 09:55:07.627523 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln"] Mar 18 09:55:07.650518 master-0 kubenswrapper[8244]: I0318 09:55:07.650450 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8b906fc0-f2bf-4586-97e6-921bbd467b65-etcd-client\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 09:55:07.650809 master-0 kubenswrapper[8244]: I0318 09:55:07.650614 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8b906fc0-f2bf-4586-97e6-921bbd467b65-audit-policies\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 09:55:07.650809 master-0 kubenswrapper[8244]: I0318 09:55:07.650699 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw4s4\" (UniqueName: \"kubernetes.io/projected/8b906fc0-f2bf-4586-97e6-921bbd467b65-kube-api-access-rw4s4\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 09:55:07.650809 master-0 kubenswrapper[8244]: I0318 09:55:07.650724 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b906fc0-f2bf-4586-97e6-921bbd467b65-serving-cert\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 09:55:07.650809 
master-0 kubenswrapper[8244]: I0318 09:55:07.650759 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8b906fc0-f2bf-4586-97e6-921bbd467b65-encryption-config\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 09:55:07.650809 master-0 kubenswrapper[8244]: I0318 09:55:07.650775 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b906fc0-f2bf-4586-97e6-921bbd467b65-trusted-ca-bundle\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 09:55:07.651229 master-0 kubenswrapper[8244]: I0318 09:55:07.650815 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8b906fc0-f2bf-4586-97e6-921bbd467b65-etcd-serving-ca\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 09:55:07.651229 master-0 kubenswrapper[8244]: I0318 09:55:07.650910 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8b906fc0-f2bf-4586-97e6-921bbd467b65-audit-dir\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 09:55:07.654396 master-0 kubenswrapper[8244]: I0318 09:55:07.654342 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7667954bb7-xws4c" Mar 18 09:55:07.751972 master-0 kubenswrapper[8244]: I0318 09:55:07.751810 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8b906fc0-f2bf-4586-97e6-921bbd467b65-etcd-client\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 09:55:07.751972 master-0 kubenswrapper[8244]: I0318 09:55:07.751931 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8b906fc0-f2bf-4586-97e6-921bbd467b65-audit-policies\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 09:55:07.751972 master-0 kubenswrapper[8244]: I0318 09:55:07.751978 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b906fc0-f2bf-4586-97e6-921bbd467b65-serving-cert\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 09:55:07.752510 master-0 kubenswrapper[8244]: I0318 09:55:07.752003 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw4s4\" (UniqueName: \"kubernetes.io/projected/8b906fc0-f2bf-4586-97e6-921bbd467b65-kube-api-access-rw4s4\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 09:55:07.752510 master-0 kubenswrapper[8244]: I0318 09:55:07.752032 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/8b906fc0-f2bf-4586-97e6-921bbd467b65-encryption-config\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 09:55:07.752510 master-0 kubenswrapper[8244]: I0318 09:55:07.752052 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b906fc0-f2bf-4586-97e6-921bbd467b65-trusted-ca-bundle\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 09:55:07.752510 master-0 kubenswrapper[8244]: I0318 09:55:07.752088 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8b906fc0-f2bf-4586-97e6-921bbd467b65-etcd-serving-ca\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 09:55:07.752510 master-0 kubenswrapper[8244]: I0318 09:55:07.752131 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8b906fc0-f2bf-4586-97e6-921bbd467b65-audit-dir\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 09:55:07.752510 master-0 kubenswrapper[8244]: I0318 09:55:07.752208 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8b906fc0-f2bf-4586-97e6-921bbd467b65-audit-dir\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 09:55:07.754971 master-0 kubenswrapper[8244]: I0318 09:55:07.754489 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8b906fc0-f2bf-4586-97e6-921bbd467b65-audit-policies\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 09:55:07.759076 master-0 kubenswrapper[8244]: I0318 09:55:07.757111 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8b906fc0-f2bf-4586-97e6-921bbd467b65-etcd-serving-ca\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 09:55:07.760673 master-0 kubenswrapper[8244]: I0318 09:55:07.759943 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b906fc0-f2bf-4586-97e6-921bbd467b65-trusted-ca-bundle\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 09:55:07.765798 master-0 kubenswrapper[8244]: I0318 09:55:07.765509 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b906fc0-f2bf-4586-97e6-921bbd467b65-serving-cert\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 09:55:07.767520 master-0 kubenswrapper[8244]: I0318 09:55:07.767099 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8b906fc0-f2bf-4586-97e6-921bbd467b65-encryption-config\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 09:55:07.768718 master-0 kubenswrapper[8244]: I0318 09:55:07.768684 8244 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="0fca42d9-0447-4762-9898-a05d0e3fe65c" path="/var/lib/kubelet/pods/0fca42d9-0447-4762-9898-a05d0e3fe65c/volumes" Mar 18 09:55:07.772942 master-0 kubenswrapper[8244]: I0318 09:55:07.772475 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw4s4\" (UniqueName: \"kubernetes.io/projected/8b906fc0-f2bf-4586-97e6-921bbd467b65-kube-api-access-rw4s4\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 09:55:07.773562 master-0 kubenswrapper[8244]: I0318 09:55:07.773540 8244 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2d6bbbe-b2fb-4df3-b93f-b3b47532719b" path="/var/lib/kubelet/pods/c2d6bbbe-b2fb-4df3-b93f-b3b47532719b/volumes" Mar 18 09:55:07.774327 master-0 kubenswrapper[8244]: I0318 09:55:07.774287 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8b906fc0-f2bf-4586-97e6-921bbd467b65-etcd-client\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 09:55:07.774563 master-0 kubenswrapper[8244]: I0318 09:55:07.774544 8244 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de85e63f-d7e1-4b5c-9e7f-f1b679b59158" path="/var/lib/kubelet/pods/de85e63f-d7e1-4b5c-9e7f-f1b679b59158/volumes" Mar 18 09:55:07.827133 master-0 kubenswrapper[8244]: I0318 09:55:07.826334 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-hjpz8"] Mar 18 09:55:07.827133 master-0 kubenswrapper[8244]: I0318 09:55:07.827028 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-hjpz8" Mar 18 09:55:07.850876 master-0 kubenswrapper[8244]: I0318 09:55:07.850776 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7667954bb7-xws4c"] Mar 18 09:55:07.853072 master-0 kubenswrapper[8244]: I0318 09:55:07.852911 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59hld\" (UniqueName: \"kubernetes.io/projected/e8d3cf68-ed97-45b9-8c83-b42bb1f789fc-kube-api-access-59hld\") pod \"node-resolver-hjpz8\" (UID: \"e8d3cf68-ed97-45b9-8c83-b42bb1f789fc\") " pod="openshift-dns/node-resolver-hjpz8" Mar 18 09:55:07.853682 master-0 kubenswrapper[8244]: I0318 09:55:07.853644 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e8d3cf68-ed97-45b9-8c83-b42bb1f789fc-hosts-file\") pod \"node-resolver-hjpz8\" (UID: \"e8d3cf68-ed97-45b9-8c83-b42bb1f789fc\") " pod="openshift-dns/node-resolver-hjpz8" Mar 18 09:55:07.937202 master-0 kubenswrapper[8244]: I0318 09:55:07.937135 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 09:55:07.955069 master-0 kubenswrapper[8244]: I0318 09:55:07.954995 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e8d3cf68-ed97-45b9-8c83-b42bb1f789fc-hosts-file\") pod \"node-resolver-hjpz8\" (UID: \"e8d3cf68-ed97-45b9-8c83-b42bb1f789fc\") " pod="openshift-dns/node-resolver-hjpz8" Mar 18 09:55:07.955241 master-0 kubenswrapper[8244]: I0318 09:55:07.955216 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59hld\" (UniqueName: \"kubernetes.io/projected/e8d3cf68-ed97-45b9-8c83-b42bb1f789fc-kube-api-access-59hld\") pod \"node-resolver-hjpz8\" (UID: \"e8d3cf68-ed97-45b9-8c83-b42bb1f789fc\") " pod="openshift-dns/node-resolver-hjpz8" Mar 18 09:55:07.959600 master-0 kubenswrapper[8244]: I0318 09:55:07.959544 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e8d3cf68-ed97-45b9-8c83-b42bb1f789fc-hosts-file\") pod \"node-resolver-hjpz8\" (UID: \"e8d3cf68-ed97-45b9-8c83-b42bb1f789fc\") " pod="openshift-dns/node-resolver-hjpz8" Mar 18 09:55:07.986741 master-0 kubenswrapper[8244]: I0318 09:55:07.986189 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59hld\" (UniqueName: \"kubernetes.io/projected/e8d3cf68-ed97-45b9-8c83-b42bb1f789fc-kube-api-access-59hld\") pod \"node-resolver-hjpz8\" (UID: \"e8d3cf68-ed97-45b9-8c83-b42bb1f789fc\") " pod="openshift-dns/node-resolver-hjpz8" Mar 18 09:55:08.055752 master-0 kubenswrapper[8244]: I0318 09:55:08.055690 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/da04c6fa-4916-4bed-a6b2-cc92bf2ee379-metrics-tls\") pod \"dns-default-z9sf5\" (UID: \"da04c6fa-4916-4bed-a6b2-cc92bf2ee379\") " 
pod="openshift-dns/dns-default-z9sf5" Mar 18 09:55:08.064271 master-0 kubenswrapper[8244]: I0318 09:55:08.064237 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/da04c6fa-4916-4bed-a6b2-cc92bf2ee379-metrics-tls\") pod \"dns-default-z9sf5\" (UID: \"da04c6fa-4916-4bed-a6b2-cc92bf2ee379\") " pod="openshift-dns/dns-default-z9sf5" Mar 18 09:55:08.158319 master-0 kubenswrapper[8244]: I0318 09:55:08.158247 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-hjpz8" Mar 18 09:55:08.314265 master-0 kubenswrapper[8244]: I0318 09:55:08.314215 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-z9sf5" Mar 18 09:55:09.238099 master-0 kubenswrapper[8244]: W0318 09:55:09.238042 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84630bf5_3d03_48ec_9b0c_34034f6181d4.slice/crio-9ee500d397f055af42d968375a2a159609fc36101c1c2b378af379b040c4ff55 WatchSource:0}: Error finding container 9ee500d397f055af42d968375a2a159609fc36101c1c2b378af379b040c4ff55: Status 404 returned error can't find the container with id 9ee500d397f055af42d968375a2a159609fc36101c1c2b378af379b040c4ff55 Mar 18 09:55:09.706065 master-0 kubenswrapper[8244]: I0318 09:55:09.705982 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-664bd974c9-7w9f6"] Mar 18 09:55:09.707333 master-0 kubenswrapper[8244]: I0318 09:55:09.707170 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-664bd974c9-7w9f6" Mar 18 09:55:09.719256 master-0 kubenswrapper[8244]: I0318 09:55:09.712693 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 09:55:09.719256 master-0 kubenswrapper[8244]: I0318 09:55:09.712883 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 18 09:55:09.719256 master-0 kubenswrapper[8244]: I0318 09:55:09.713106 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 18 09:55:09.719256 master-0 kubenswrapper[8244]: I0318 09:55:09.713388 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 18 09:55:09.719256 master-0 kubenswrapper[8244]: I0318 09:55:09.715384 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 18 09:55:09.719256 master-0 kubenswrapper[8244]: I0318 09:55:09.715561 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-687747fbb4-k7dnf"] Mar 18 09:55:09.719256 master-0 kubenswrapper[8244]: I0318 09:55:09.716347 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:09.720107 master-0 kubenswrapper[8244]: I0318 09:55:09.719601 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 18 09:55:09.720107 master-0 kubenswrapper[8244]: I0318 09:55:09.719667 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 18 09:55:09.720107 master-0 kubenswrapper[8244]: I0318 09:55:09.719762 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 18 09:55:09.720107 master-0 kubenswrapper[8244]: I0318 09:55:09.719866 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 18 09:55:09.721346 master-0 kubenswrapper[8244]: I0318 09:55:09.721321 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 18 09:55:09.721749 master-0 kubenswrapper[8244]: I0318 09:55:09.721722 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 18 09:55:09.724503 master-0 kubenswrapper[8244]: I0318 09:55:09.724479 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-664bd974c9-7w9f6"] Mar 18 09:55:09.749316 master-0 kubenswrapper[8244]: I0318 09:55:09.748356 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 18 09:55:09.749316 master-0 kubenswrapper[8244]: I0318 09:55:09.748932 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 18 09:55:09.749521 master-0 kubenswrapper[8244]: I0318 09:55:09.749405 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 18 09:55:09.774667 master-0 kubenswrapper[8244]: I0318 
09:55:09.774569 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 18 09:55:09.779042 master-0 kubenswrapper[8244]: I0318 09:55:09.779000 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-687747fbb4-k7dnf"] Mar 18 09:55:09.795043 master-0 kubenswrapper[8244]: I0318 09:55:09.794994 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-image-import-ca\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:09.795225 master-0 kubenswrapper[8244]: I0318 09:55:09.795052 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0c7b317c-d141-4e69-9c82-4a5dda6c3248-etcd-client\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:09.795225 master-0 kubenswrapper[8244]: I0318 09:55:09.795083 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0c7b317c-d141-4e69-9c82-4a5dda6c3248-encryption-config\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:09.795225 master-0 kubenswrapper[8244]: I0318 09:55:09.795157 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c7b317c-d141-4e69-9c82-4a5dda6c3248-serving-cert\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " 
pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:09.795225 master-0 kubenswrapper[8244]: I0318 09:55:09.795203 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26snd\" (UniqueName: \"kubernetes.io/projected/9f251e1b-0f5d-460f-8152-c9201dba0cff-kube-api-access-26snd\") pod \"route-controller-manager-664bd974c9-7w9f6\" (UID: \"9f251e1b-0f5d-460f-8152-c9201dba0cff\") " pod="openshift-route-controller-manager/route-controller-manager-664bd974c9-7w9f6" Mar 18 09:55:09.795400 master-0 kubenswrapper[8244]: I0318 09:55:09.795230 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f251e1b-0f5d-460f-8152-c9201dba0cff-serving-cert\") pod \"route-controller-manager-664bd974c9-7w9f6\" (UID: \"9f251e1b-0f5d-460f-8152-c9201dba0cff\") " pod="openshift-route-controller-manager/route-controller-manager-664bd974c9-7w9f6" Mar 18 09:55:09.795400 master-0 kubenswrapper[8244]: I0318 09:55:09.795252 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-audit\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:09.795400 master-0 kubenswrapper[8244]: I0318 09:55:09.795329 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-549bq\" (UniqueName: \"kubernetes.io/projected/0c7b317c-d141-4e69-9c82-4a5dda6c3248-kube-api-access-549bq\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:09.795791 master-0 kubenswrapper[8244]: I0318 09:55:09.795755 8244 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-config\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:09.795942 master-0 kubenswrapper[8244]: I0318 09:55:09.795912 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f251e1b-0f5d-460f-8152-c9201dba0cff-config\") pod \"route-controller-manager-664bd974c9-7w9f6\" (UID: \"9f251e1b-0f5d-460f-8152-c9201dba0cff\") " pod="openshift-route-controller-manager/route-controller-manager-664bd974c9-7w9f6" Mar 18 09:55:09.795990 master-0 kubenswrapper[8244]: I0318 09:55:09.795949 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0c7b317c-d141-4e69-9c82-4a5dda6c3248-node-pullsecrets\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:09.796041 master-0 kubenswrapper[8244]: I0318 09:55:09.795994 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c7b317c-d141-4e69-9c82-4a5dda6c3248-audit-dir\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:09.796085 master-0 kubenswrapper[8244]: I0318 09:55:09.796070 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-etcd-serving-ca\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " 
pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:09.796134 master-0 kubenswrapper[8244]: I0318 09:55:09.796101 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f251e1b-0f5d-460f-8152-c9201dba0cff-client-ca\") pod \"route-controller-manager-664bd974c9-7w9f6\" (UID: \"9f251e1b-0f5d-460f-8152-c9201dba0cff\") " pod="openshift-route-controller-manager/route-controller-manager-664bd974c9-7w9f6" Mar 18 09:55:09.796178 master-0 kubenswrapper[8244]: I0318 09:55:09.796138 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-trusted-ca-bundle\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:09.897052 master-0 kubenswrapper[8244]: I0318 09:55:09.897001 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f251e1b-0f5d-460f-8152-c9201dba0cff-config\") pod \"route-controller-manager-664bd974c9-7w9f6\" (UID: \"9f251e1b-0f5d-460f-8152-c9201dba0cff\") " pod="openshift-route-controller-manager/route-controller-manager-664bd974c9-7w9f6" Mar 18 09:55:09.897052 master-0 kubenswrapper[8244]: I0318 09:55:09.897047 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0c7b317c-d141-4e69-9c82-4a5dda6c3248-node-pullsecrets\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:09.897052 master-0 kubenswrapper[8244]: I0318 09:55:09.897064 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/0c7b317c-d141-4e69-9c82-4a5dda6c3248-audit-dir\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:09.897364 master-0 kubenswrapper[8244]: I0318 09:55:09.897093 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-etcd-serving-ca\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:09.897364 master-0 kubenswrapper[8244]: I0318 09:55:09.897147 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c7b317c-d141-4e69-9c82-4a5dda6c3248-audit-dir\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:09.897364 master-0 kubenswrapper[8244]: I0318 09:55:09.897278 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f251e1b-0f5d-460f-8152-c9201dba0cff-client-ca\") pod \"route-controller-manager-664bd974c9-7w9f6\" (UID: \"9f251e1b-0f5d-460f-8152-c9201dba0cff\") " pod="openshift-route-controller-manager/route-controller-manager-664bd974c9-7w9f6" Mar 18 09:55:09.897364 master-0 kubenswrapper[8244]: I0318 09:55:09.897351 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-trusted-ca-bundle\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:09.897525 master-0 kubenswrapper[8244]: I0318 09:55:09.897403 8244 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-image-import-ca\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:09.897525 master-0 kubenswrapper[8244]: I0318 09:55:09.897434 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0c7b317c-d141-4e69-9c82-4a5dda6c3248-etcd-client\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:09.897525 master-0 kubenswrapper[8244]: I0318 09:55:09.897471 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0c7b317c-d141-4e69-9c82-4a5dda6c3248-encryption-config\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:09.897653 master-0 kubenswrapper[8244]: I0318 09:55:09.897527 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c7b317c-d141-4e69-9c82-4a5dda6c3248-serving-cert\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:09.897653 master-0 kubenswrapper[8244]: I0318 09:55:09.897580 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26snd\" (UniqueName: \"kubernetes.io/projected/9f251e1b-0f5d-460f-8152-c9201dba0cff-kube-api-access-26snd\") pod \"route-controller-manager-664bd974c9-7w9f6\" (UID: \"9f251e1b-0f5d-460f-8152-c9201dba0cff\") " pod="openshift-route-controller-manager/route-controller-manager-664bd974c9-7w9f6" Mar 18 09:55:09.897653 
master-0 kubenswrapper[8244]: I0318 09:55:09.897615 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f251e1b-0f5d-460f-8152-c9201dba0cff-serving-cert\") pod \"route-controller-manager-664bd974c9-7w9f6\" (UID: \"9f251e1b-0f5d-460f-8152-c9201dba0cff\") " pod="openshift-route-controller-manager/route-controller-manager-664bd974c9-7w9f6" Mar 18 09:55:09.897653 master-0 kubenswrapper[8244]: I0318 09:55:09.897640 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-audit\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:09.897810 master-0 kubenswrapper[8244]: I0318 09:55:09.897671 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-549bq\" (UniqueName: \"kubernetes.io/projected/0c7b317c-d141-4e69-9c82-4a5dda6c3248-kube-api-access-549bq\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:09.897810 master-0 kubenswrapper[8244]: I0318 09:55:09.897695 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-config\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:09.898065 master-0 kubenswrapper[8244]: I0318 09:55:09.897996 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-etcd-serving-ca\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " 
pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:09.898665 master-0 kubenswrapper[8244]: I0318 09:55:09.898226 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-trusted-ca-bundle\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:09.898665 master-0 kubenswrapper[8244]: I0318 09:55:09.898510 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f251e1b-0f5d-460f-8152-c9201dba0cff-client-ca\") pod \"route-controller-manager-664bd974c9-7w9f6\" (UID: \"9f251e1b-0f5d-460f-8152-c9201dba0cff\") " pod="openshift-route-controller-manager/route-controller-manager-664bd974c9-7w9f6" Mar 18 09:55:09.898665 master-0 kubenswrapper[8244]: I0318 09:55:09.898516 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-config\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:09.898665 master-0 kubenswrapper[8244]: I0318 09:55:09.898596 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0c7b317c-d141-4e69-9c82-4a5dda6c3248-node-pullsecrets\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:09.899031 master-0 kubenswrapper[8244]: I0318 09:55:09.898891 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-audit\") pod \"apiserver-687747fbb4-k7dnf\" (UID: 
\"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:09.899625 master-0 kubenswrapper[8244]: I0318 09:55:09.899609 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f251e1b-0f5d-460f-8152-c9201dba0cff-config\") pod \"route-controller-manager-664bd974c9-7w9f6\" (UID: \"9f251e1b-0f5d-460f-8152-c9201dba0cff\") " pod="openshift-route-controller-manager/route-controller-manager-664bd974c9-7w9f6" Mar 18 09:55:09.901944 master-0 kubenswrapper[8244]: I0318 09:55:09.901909 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0c7b317c-d141-4e69-9c82-4a5dda6c3248-encryption-config\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:09.902075 master-0 kubenswrapper[8244]: I0318 09:55:09.901960 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-image-import-ca\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:09.905058 master-0 kubenswrapper[8244]: I0318 09:55:09.905021 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c7b317c-d141-4e69-9c82-4a5dda6c3248-serving-cert\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:09.906099 master-0 kubenswrapper[8244]: I0318 09:55:09.906075 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f251e1b-0f5d-460f-8152-c9201dba0cff-serving-cert\") pod 
\"route-controller-manager-664bd974c9-7w9f6\" (UID: \"9f251e1b-0f5d-460f-8152-c9201dba0cff\") " pod="openshift-route-controller-manager/route-controller-manager-664bd974c9-7w9f6" Mar 18 09:55:09.908199 master-0 kubenswrapper[8244]: I0318 09:55:09.908168 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0c7b317c-d141-4e69-9c82-4a5dda6c3248-etcd-client\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:09.916095 master-0 kubenswrapper[8244]: I0318 09:55:09.916050 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-549bq\" (UniqueName: \"kubernetes.io/projected/0c7b317c-d141-4e69-9c82-4a5dda6c3248-kube-api-access-549bq\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:09.917303 master-0 kubenswrapper[8244]: I0318 09:55:09.917264 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26snd\" (UniqueName: \"kubernetes.io/projected/9f251e1b-0f5d-460f-8152-c9201dba0cff-kube-api-access-26snd\") pod \"route-controller-manager-664bd974c9-7w9f6\" (UID: \"9f251e1b-0f5d-460f-8152-c9201dba0cff\") " pod="openshift-route-controller-manager/route-controller-manager-664bd974c9-7w9f6" Mar 18 09:55:10.075578 master-0 kubenswrapper[8244]: I0318 09:55:10.075529 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-664bd974c9-7w9f6" Mar 18 09:55:10.087707 master-0 kubenswrapper[8244]: I0318 09:55:10.087667 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:10.228258 master-0 kubenswrapper[8244]: I0318 09:55:10.228213 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7667954bb7-xws4c" event={"ID":"84630bf5-3d03-48ec-9b0c-34034f6181d4","Type":"ContainerStarted","Data":"9ee500d397f055af42d968375a2a159609fc36101c1c2b378af379b040c4ff55"} Mar 18 09:55:12.115094 master-0 kubenswrapper[8244]: I0318 09:55:12.115024 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:55:12.115980 master-0 kubenswrapper[8244]: I0318 09:55:12.115258 8244 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 09:55:12.137426 master-0 kubenswrapper[8244]: I0318 09:55:12.137342 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 09:55:12.978859 master-0 kubenswrapper[8244]: W0318 09:55:12.978331 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8d3cf68_ed97_45b9_8c83_b42bb1f789fc.slice/crio-cc6e82f62809390e77afef9a24511f8204b584c9c34f5174bf13a9f3c743fa58 WatchSource:0}: Error finding container cc6e82f62809390e77afef9a24511f8204b584c9c34f5174bf13a9f3c743fa58: Status 404 returned error can't find the container with id cc6e82f62809390e77afef9a24511f8204b584c9c34f5174bf13a9f3c743fa58 Mar 18 09:55:13.144613 master-0 kubenswrapper[8244]: I0318 09:55:13.144259 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-z9sf5"] Mar 18 09:55:13.241370 master-0 kubenswrapper[8244]: I0318 09:55:13.241230 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hjpz8" 
event={"ID":"e8d3cf68-ed97-45b9-8c83-b42bb1f789fc","Type":"ContainerStarted","Data":"cc6e82f62809390e77afef9a24511f8204b584c9c34f5174bf13a9f3c743fa58"} Mar 18 09:55:14.246086 master-0 kubenswrapper[8244]: I0318 09:55:14.245879 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z9sf5" event={"ID":"da04c6fa-4916-4bed-a6b2-cc92bf2ee379","Type":"ContainerStarted","Data":"0a14d09c0c63bc07a9e3f986358b6bbfe11d33fdfadd6b5aba6cb62ef0a527b0"} Mar 18 09:55:14.292185 master-0 kubenswrapper[8244]: I0318 09:55:14.289962 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln"] Mar 18 09:55:14.292185 master-0 kubenswrapper[8244]: I0318 09:55:14.291542 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-687747fbb4-k7dnf"] Mar 18 09:55:14.312565 master-0 kubenswrapper[8244]: I0318 09:55:14.312505 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-664bd974c9-7w9f6"] Mar 18 09:55:14.315161 master-0 kubenswrapper[8244]: W0318 09:55:14.315109 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b906fc0_f2bf_4586_97e6_921bbd467b65.slice/crio-1b4d46c0a582fa8416fadc519a245d9a05f81263579189dfddab63cae5612499 WatchSource:0}: Error finding container 1b4d46c0a582fa8416fadc519a245d9a05f81263579189dfddab63cae5612499: Status 404 returned error can't find the container with id 1b4d46c0a582fa8416fadc519a245d9a05f81263579189dfddab63cae5612499 Mar 18 09:55:14.337196 master-0 kubenswrapper[8244]: W0318 09:55:14.337122 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f251e1b_0f5d_460f_8152_c9201dba0cff.slice/crio-b9e56ee47991b15430eb39589af396f99ed92f11ae0487ffd43e56616df53489 WatchSource:0}: Error finding container 
b9e56ee47991b15430eb39589af396f99ed92f11ae0487ffd43e56616df53489: Status 404 returned error can't find the container with id b9e56ee47991b15430eb39589af396f99ed92f11ae0487ffd43e56616df53489 Mar 18 09:55:14.804660 master-0 kubenswrapper[8244]: I0318 09:55:14.804447 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-6rhgt"] Mar 18 09:55:14.805182 master-0 kubenswrapper[8244]: I0318 09:55:14.805155 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.867199 master-0 kubenswrapper[8244]: I0318 09:55:14.867103 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t77j8\" (UniqueName: \"kubernetes.io/projected/b0f77d68-f228-4f82-befb-fb2a2ce2e976-kube-api-access-t77j8\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.867199 master-0 kubenswrapper[8244]: I0318 09:55:14.867159 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-tuned\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.867447 master-0 kubenswrapper[8244]: I0318 09:55:14.867350 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-sysconfig\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.867447 master-0 kubenswrapper[8244]: I0318 09:55:14.867414 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-lib-modules\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.867510 master-0 kubenswrapper[8244]: I0318 09:55:14.867456 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-sysctl-d\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.867510 master-0 kubenswrapper[8244]: I0318 09:55:14.867492 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-host\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.867573 master-0 kubenswrapper[8244]: I0318 09:55:14.867520 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-systemd\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.867573 master-0 kubenswrapper[8244]: I0318 09:55:14.867548 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-var-lib-kubelet\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.867629 master-0 kubenswrapper[8244]: I0318 09:55:14.867577 8244 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-sysctl-conf\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.867629 master-0 kubenswrapper[8244]: I0318 09:55:14.867611 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-run\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.867690 master-0 kubenswrapper[8244]: I0318 09:55:14.867635 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-kubernetes\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.867690 master-0 kubenswrapper[8244]: I0318 09:55:14.867657 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b0f77d68-f228-4f82-befb-fb2a2ce2e976-tmp\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.867750 master-0 kubenswrapper[8244]: I0318 09:55:14.867688 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-sys\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.867750 master-0 kubenswrapper[8244]: I0318 09:55:14.867719 8244 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-modprobe-d\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.968569 master-0 kubenswrapper[8244]: I0318 09:55:14.968515 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-kubernetes\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.968569 master-0 kubenswrapper[8244]: I0318 09:55:14.968568 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b0f77d68-f228-4f82-befb-fb2a2ce2e976-tmp\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.968899 master-0 kubenswrapper[8244]: I0318 09:55:14.968630 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-modprobe-d\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.968899 master-0 kubenswrapper[8244]: I0318 09:55:14.968688 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-sys\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.968899 master-0 kubenswrapper[8244]: I0318 09:55:14.968708 8244 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-kubernetes\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.968899 master-0 kubenswrapper[8244]: I0318 09:55:14.968798 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t77j8\" (UniqueName: \"kubernetes.io/projected/b0f77d68-f228-4f82-befb-fb2a2ce2e976-kube-api-access-t77j8\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.968899 master-0 kubenswrapper[8244]: I0318 09:55:14.968850 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-modprobe-d\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.968899 master-0 kubenswrapper[8244]: I0318 09:55:14.968876 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-sys\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.969161 master-0 kubenswrapper[8244]: I0318 09:55:14.968924 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-tuned\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.969161 master-0 kubenswrapper[8244]: I0318 09:55:14.968966 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-sysconfig\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.969161 master-0 kubenswrapper[8244]: I0318 09:55:14.969000 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-lib-modules\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.969161 master-0 kubenswrapper[8244]: I0318 09:55:14.969033 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-sysctl-d\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.969161 master-0 kubenswrapper[8244]: I0318 09:55:14.969058 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-host\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.969161 master-0 kubenswrapper[8244]: I0318 09:55:14.969082 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-systemd\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.969161 master-0 kubenswrapper[8244]: I0318 09:55:14.969101 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-var-lib-kubelet\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.969161 master-0 kubenswrapper[8244]: I0318 09:55:14.969123 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-sysctl-conf\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.969161 master-0 kubenswrapper[8244]: I0318 09:55:14.969143 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-run\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.969589 master-0 kubenswrapper[8244]: I0318 09:55:14.969187 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-sysctl-d\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.969589 master-0 kubenswrapper[8244]: I0318 09:55:14.969231 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-host\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.969589 master-0 kubenswrapper[8244]: I0318 09:55:14.969236 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-sysconfig\") pod 
\"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.969589 master-0 kubenswrapper[8244]: I0318 09:55:14.969450 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-lib-modules\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.969589 master-0 kubenswrapper[8244]: I0318 09:55:14.969498 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-var-lib-kubelet\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.969589 master-0 kubenswrapper[8244]: I0318 09:55:14.969537 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-systemd\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.969809 master-0 kubenswrapper[8244]: I0318 09:55:14.969660 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-sysctl-conf\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.969809 master-0 kubenswrapper[8244]: I0318 09:55:14.969663 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-run\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " 
pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.973411 master-0 kubenswrapper[8244]: I0318 09:55:14.973381 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b0f77d68-f228-4f82-befb-fb2a2ce2e976-tmp\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.973810 master-0 kubenswrapper[8244]: I0318 09:55:14.973737 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-tuned\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:14.978205 master-0 kubenswrapper[8244]: I0318 09:55:14.978151 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 18 09:55:14.978434 master-0 kubenswrapper[8244]: I0318 09:55:14.978390 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-1-master-0" podUID="2b86644b-ddbd-4b14-b82d-b7d614f7f81e" containerName="installer" containerID="cri-o://826610ccc7ba64519b97c82e3e527d6dc4e2a131529f71a75f5c480a046f7aa6" gracePeriod=30 Mar 18 09:55:14.992565 master-0 kubenswrapper[8244]: I0318 09:55:14.992508 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t77j8\" (UniqueName: \"kubernetes.io/projected/b0f77d68-f228-4f82-befb-fb2a2ce2e976-kube-api-access-t77j8\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:15.130444 master-0 kubenswrapper[8244]: I0318 09:55:15.130308 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 09:55:15.150859 master-0 kubenswrapper[8244]: W0318 09:55:15.150791 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0f77d68_f228_4f82_befb_fb2a2ce2e976.slice/crio-314f7c80f05c68da161ac362126e7420808268c6c3e4c4db05d0a138683db079 WatchSource:0}: Error finding container 314f7c80f05c68da161ac362126e7420808268c6c3e4c4db05d0a138683db079: Status 404 returned error can't find the container with id 314f7c80f05c68da161ac362126e7420808268c6c3e4c4db05d0a138683db079 Mar 18 09:55:15.250056 master-0 kubenswrapper[8244]: I0318 09:55:15.249761 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-664bd974c9-7w9f6" event={"ID":"9f251e1b-0f5d-460f-8152-c9201dba0cff","Type":"ContainerStarted","Data":"b9e56ee47991b15430eb39589af396f99ed92f11ae0487ffd43e56616df53489"} Mar 18 09:55:15.251296 master-0 kubenswrapper[8244]: I0318 09:55:15.251270 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m" event={"ID":"8ee99294-4785-49d0-b493-0d734cf09396","Type":"ContainerStarted","Data":"9f8d2fc41a698996d2e8d108e6acdc91bab1b3eba85194b567c7b7ad7a300279"} Mar 18 09:55:15.253229 master-0 kubenswrapper[8244]: I0318 09:55:15.253178 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" event={"ID":"c2635254-a491-42e5-b598-461c24bf77ca","Type":"ContainerStarted","Data":"c59a5fbf874d40b4d6dbdabc263d54ba8033378f9b3eccda436cb84f154d827b"} Mar 18 09:55:15.254765 master-0 kubenswrapper[8244]: I0318 09:55:15.254736 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hjpz8" 
event={"ID":"e8d3cf68-ed97-45b9-8c83-b42bb1f789fc","Type":"ContainerStarted","Data":"382d42d8bcf4f3e384408955ff6a1f34f75ca9f7986c7713b47b029ca19ad22c"} Mar 18 09:55:15.256142 master-0 kubenswrapper[8244]: I0318 09:55:15.256117 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" event={"ID":"8b906fc0-f2bf-4586-97e6-921bbd467b65","Type":"ContainerStarted","Data":"1b4d46c0a582fa8416fadc519a245d9a05f81263579189dfddab63cae5612499"} Mar 18 09:55:15.257314 master-0 kubenswrapper[8244]: I0318 09:55:15.257280 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" event={"ID":"accc57fb-75f5-4f89-9804-6ede7f77e27c","Type":"ContainerStarted","Data":"05cff4221243ebd7ae26153c89c1fdb47cc0832d0d0994dde3cdc704bcd74a3b"} Mar 18 09:55:15.257375 master-0 kubenswrapper[8244]: I0318 09:55:15.257318 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" event={"ID":"accc57fb-75f5-4f89-9804-6ede7f77e27c","Type":"ContainerStarted","Data":"206825c3b2d516109311b9ec6547c75a5e9979c7b55c567cf556284de0799148"} Mar 18 09:55:15.262889 master-0 kubenswrapper[8244]: I0318 09:55:15.262852 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" event={"ID":"b0f77d68-f228-4f82-befb-fb2a2ce2e976","Type":"ContainerStarted","Data":"314f7c80f05c68da161ac362126e7420808268c6c3e4c4db05d0a138683db079"} Mar 18 09:55:15.264138 master-0 kubenswrapper[8244]: I0318 09:55:15.264114 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7667954bb7-xws4c" event={"ID":"84630bf5-3d03-48ec-9b0c-34034f6181d4","Type":"ContainerStarted","Data":"e1957d98f39301dce1b4014ac7221a19d0b47e1c246671e7f43baf1f91c41c68"} Mar 18 09:55:15.264965 master-0 kubenswrapper[8244]: I0318 09:55:15.264940 8244 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7667954bb7-xws4c" Mar 18 09:55:15.268862 master-0 kubenswrapper[8244]: I0318 09:55:15.266729 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" event={"ID":"0c7b317c-d141-4e69-9c82-4a5dda6c3248","Type":"ContainerStarted","Data":"03c65d78c2c86aff78c560583deceefc749227ea76cab522d93c1dd2064cc015"} Mar 18 09:55:15.273857 master-0 kubenswrapper[8244]: I0318 09:55:15.272561 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7667954bb7-xws4c" Mar 18 09:55:15.342626 master-0 kubenswrapper[8244]: I0318 09:55:15.342109 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7667954bb7-xws4c" podStartSLOduration=5.239321144 podStartE2EDuration="10.342085034s" podCreationTimestamp="2026-03-18 09:55:05 +0000 UTC" firstStartedPulling="2026-03-18 09:55:09.240041367 +0000 UTC m=+25.719777495" lastFinishedPulling="2026-03-18 09:55:14.342805257 +0000 UTC m=+30.822541385" observedRunningTime="2026-03-18 09:55:15.34110136 +0000 UTC m=+31.820837488" watchObservedRunningTime="2026-03-18 09:55:15.342085034 +0000 UTC m=+31.821821162" Mar 18 09:55:15.355858 master-0 kubenswrapper[8244]: I0318 09:55:15.355113 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-hjpz8" podStartSLOduration=8.355095077 podStartE2EDuration="8.355095077s" podCreationTimestamp="2026-03-18 09:55:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:55:15.353630242 +0000 UTC m=+31.833366370" watchObservedRunningTime="2026-03-18 09:55:15.355095077 +0000 UTC m=+31.834831205" Mar 18 09:55:15.635471 master-0 kubenswrapper[8244]: I0318 09:55:15.632722 8244 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw"] Mar 18 09:55:15.635471 master-0 kubenswrapper[8244]: I0318 09:55:15.633568 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" Mar 18 09:55:15.647467 master-0 kubenswrapper[8244]: I0318 09:55:15.643948 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 18 09:55:15.651859 master-0 kubenswrapper[8244]: I0318 09:55:15.649782 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw"] Mar 18 09:55:15.651859 master-0 kubenswrapper[8244]: I0318 09:55:15.650027 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 18 09:55:15.657331 master-0 kubenswrapper[8244]: I0318 09:55:15.655838 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 18 09:55:15.662854 master-0 kubenswrapper[8244]: I0318 09:55:15.661561 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 18 09:55:15.686849 master-0 kubenswrapper[8244]: I0318 09:55:15.683557 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-nq7mw\" (UID: \"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" Mar 18 09:55:15.686849 master-0 kubenswrapper[8244]: I0318 09:55:15.683627 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-cache\") pod \"catalogd-controller-manager-6864dc98f7-nq7mw\" (UID: \"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" Mar 18 09:55:15.686849 master-0 kubenswrapper[8244]: I0318 09:55:15.683677 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-nq7mw\" (UID: \"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" Mar 18 09:55:15.686849 master-0 kubenswrapper[8244]: I0318 09:55:15.683708 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-nq7mw\" (UID: \"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" Mar 18 09:55:15.686849 master-0 kubenswrapper[8244]: I0318 09:55:15.683741 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxl7x\" (UniqueName: \"kubernetes.io/projected/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-kube-api-access-kxl7x\") pod \"catalogd-controller-manager-6864dc98f7-nq7mw\" (UID: \"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" Mar 18 09:55:15.686849 master-0 kubenswrapper[8244]: I0318 09:55:15.683770 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-nq7mw\" (UID: 
\"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" Mar 18 09:55:15.805951 master-0 kubenswrapper[8244]: I0318 09:55:15.785456 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-nq7mw\" (UID: \"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" Mar 18 09:55:15.805951 master-0 kubenswrapper[8244]: I0318 09:55:15.785541 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-cache\") pod \"catalogd-controller-manager-6864dc98f7-nq7mw\" (UID: \"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" Mar 18 09:55:15.805951 master-0 kubenswrapper[8244]: I0318 09:55:15.785853 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-nq7mw\" (UID: \"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" Mar 18 09:55:15.805951 master-0 kubenswrapper[8244]: I0318 09:55:15.786261 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-cache\") pod \"catalogd-controller-manager-6864dc98f7-nq7mw\" (UID: \"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" Mar 18 09:55:15.805951 master-0 kubenswrapper[8244]: I0318 09:55:15.786518 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-docker\" (UniqueName: \"kubernetes.io/host-path/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-nq7mw\" (UID: \"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" Mar 18 09:55:15.805951 master-0 kubenswrapper[8244]: I0318 09:55:15.786560 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxl7x\" (UniqueName: \"kubernetes.io/projected/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-kube-api-access-kxl7x\") pod \"catalogd-controller-manager-6864dc98f7-nq7mw\" (UID: \"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" Mar 18 09:55:15.805951 master-0 kubenswrapper[8244]: I0318 09:55:15.786597 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-nq7mw\" (UID: \"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" Mar 18 09:55:15.805951 master-0 kubenswrapper[8244]: I0318 09:55:15.786685 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-nq7mw\" (UID: \"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" Mar 18 09:55:15.805951 master-0 kubenswrapper[8244]: I0318 09:55:15.786754 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-nq7mw\" (UID: \"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a\") " 
pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" Mar 18 09:55:15.805951 master-0 kubenswrapper[8244]: I0318 09:55:15.790184 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-nq7mw\" (UID: \"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" Mar 18 09:55:15.805951 master-0 kubenswrapper[8244]: I0318 09:55:15.795600 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-nq7mw\" (UID: \"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" Mar 18 09:55:15.857366 master-0 kubenswrapper[8244]: I0318 09:55:15.855785 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q"] Mar 18 09:55:15.857366 master-0 kubenswrapper[8244]: I0318 09:55:15.856564 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q" Mar 18 09:55:15.869896 master-0 kubenswrapper[8244]: I0318 09:55:15.869561 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 18 09:55:15.869896 master-0 kubenswrapper[8244]: I0318 09:55:15.869803 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 18 09:55:15.872736 master-0 kubenswrapper[8244]: I0318 09:55:15.872660 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 18 09:55:15.880167 master-0 kubenswrapper[8244]: I0318 09:55:15.880119 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxl7x\" (UniqueName: \"kubernetes.io/projected/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-kube-api-access-kxl7x\") pod \"catalogd-controller-manager-6864dc98f7-nq7mw\" (UID: \"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" Mar 18 09:55:15.989184 master-0 kubenswrapper[8244]: I0318 09:55:15.988925 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/b6948f93-b573-4f09-b754-aaa2269e2875-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-77n8q\" (UID: \"b6948f93-b573-4f09-b754-aaa2269e2875\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q" Mar 18 09:55:15.989184 master-0 kubenswrapper[8244]: I0318 09:55:15.989017 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b6948f93-b573-4f09-b754-aaa2269e2875-cache\") pod \"operator-controller-controller-manager-57777556ff-77n8q\" 
(UID: \"b6948f93-b573-4f09-b754-aaa2269e2875\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q" Mar 18 09:55:15.989184 master-0 kubenswrapper[8244]: I0318 09:55:15.989044 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/b6948f93-b573-4f09-b754-aaa2269e2875-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-77n8q\" (UID: \"b6948f93-b573-4f09-b754-aaa2269e2875\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q" Mar 18 09:55:15.989184 master-0 kubenswrapper[8244]: I0318 09:55:15.989164 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/b6948f93-b573-4f09-b754-aaa2269e2875-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-77n8q\" (UID: \"b6948f93-b573-4f09-b754-aaa2269e2875\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q" Mar 18 09:55:15.989655 master-0 kubenswrapper[8244]: I0318 09:55:15.989227 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2g9q\" (UniqueName: \"kubernetes.io/projected/b6948f93-b573-4f09-b754-aaa2269e2875-kube-api-access-t2g9q\") pod \"operator-controller-controller-manager-57777556ff-77n8q\" (UID: \"b6948f93-b573-4f09-b754-aaa2269e2875\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q" Mar 18 09:55:16.015196 master-0 kubenswrapper[8244]: I0318 09:55:16.015095 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" Mar 18 09:55:16.090971 master-0 kubenswrapper[8244]: I0318 09:55:16.090613 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/b6948f93-b573-4f09-b754-aaa2269e2875-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-77n8q\" (UID: \"b6948f93-b573-4f09-b754-aaa2269e2875\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q" Mar 18 09:55:16.091314 master-0 kubenswrapper[8244]: I0318 09:55:16.090994 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/b6948f93-b573-4f09-b754-aaa2269e2875-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-77n8q\" (UID: \"b6948f93-b573-4f09-b754-aaa2269e2875\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q" Mar 18 09:55:16.091314 master-0 kubenswrapper[8244]: I0318 09:55:16.091017 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2g9q\" (UniqueName: \"kubernetes.io/projected/b6948f93-b573-4f09-b754-aaa2269e2875-kube-api-access-t2g9q\") pod \"operator-controller-controller-manager-57777556ff-77n8q\" (UID: \"b6948f93-b573-4f09-b754-aaa2269e2875\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q" Mar 18 09:55:16.091314 master-0 kubenswrapper[8244]: I0318 09:55:16.091205 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/b6948f93-b573-4f09-b754-aaa2269e2875-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-77n8q\" (UID: \"b6948f93-b573-4f09-b754-aaa2269e2875\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q" Mar 18 09:55:16.091314 
master-0 kubenswrapper[8244]: I0318 09:55:16.091277 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b6948f93-b573-4f09-b754-aaa2269e2875-cache\") pod \"operator-controller-controller-manager-57777556ff-77n8q\" (UID: \"b6948f93-b573-4f09-b754-aaa2269e2875\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q" Mar 18 09:55:16.091683 master-0 kubenswrapper[8244]: I0318 09:55:16.091324 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/b6948f93-b573-4f09-b754-aaa2269e2875-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-77n8q\" (UID: \"b6948f93-b573-4f09-b754-aaa2269e2875\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q" Mar 18 09:55:16.091989 master-0 kubenswrapper[8244]: I0318 09:55:16.091925 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/b6948f93-b573-4f09-b754-aaa2269e2875-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-77n8q\" (UID: \"b6948f93-b573-4f09-b754-aaa2269e2875\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q" Mar 18 09:55:16.092483 master-0 kubenswrapper[8244]: I0318 09:55:16.092317 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b6948f93-b573-4f09-b754-aaa2269e2875-cache\") pod \"operator-controller-controller-manager-57777556ff-77n8q\" (UID: \"b6948f93-b573-4f09-b754-aaa2269e2875\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q" Mar 18 09:55:16.097934 master-0 kubenswrapper[8244]: I0318 09:55:16.096700 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/projected/b6948f93-b573-4f09-b754-aaa2269e2875-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-77n8q\" (UID: \"b6948f93-b573-4f09-b754-aaa2269e2875\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q" Mar 18 09:55:16.358541 master-0 kubenswrapper[8244]: I0318 09:55:16.358397 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" event={"ID":"b0f77d68-f228-4f82-befb-fb2a2ce2e976","Type":"ContainerStarted","Data":"76c33a4c994ae604d5a2b0cd0442a4834cd611c9b140b05c738310b87fd20129"} Mar 18 09:55:16.784920 master-0 kubenswrapper[8244]: I0318 09:55:16.780896 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs\") pod \"network-metrics-daemon-tbxt4\" (UID: \"0442ec6c-5973-40a5-a0c3-dc02de46d343\") " pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:55:16.784920 master-0 kubenswrapper[8244]: I0318 09:55:16.780992 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-hkzr2\" (UID: \"ca4a0040-a638-46fa-a1cb-a19d83a7ebe4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2" Mar 18 09:55:16.784920 master-0 kubenswrapper[8244]: I0318 09:55:16.781020 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-r8fkv\" (UID: \"d4d2218c-f9df-4d43-8727-ed3a920e23f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv" Mar 18 09:55:16.784920 master-0 kubenswrapper[8244]: 
I0318 09:55:16.781087 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert\") pod \"catalog-operator-68f85b4d6c-fhz5s\" (UID: \"ee376320-9ca0-444d-ab37-9cbcb6729b11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s" Mar 18 09:55:16.784920 master-0 kubenswrapper[8244]: I0318 09:55:16.781195 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-2glpv\" (UID: \"6f266bad-8b30-4300-ad93-9d48e61f2440\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" Mar 18 09:55:16.784920 master-0 kubenswrapper[8244]: I0318 09:55:16.781292 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert\") pod \"olm-operator-5c9796789-hc74k\" (UID: \"db52ca42-e458-407f-9eeb-bf6de6405edc\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k" Mar 18 09:55:16.784920 master-0 kubenswrapper[8244]: I0318 09:55:16.781346 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-8kx9m\" (UID: \"f69a00b6-d908-4485-bb0d-57594fc01d24\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m" Mar 18 09:55:16.786820 master-0 kubenswrapper[8244]: I0318 09:55:16.785706 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs\") pod 
\"network-metrics-daemon-tbxt4\" (UID: \"0442ec6c-5973-40a5-a0c3-dc02de46d343\") " pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:55:16.789638 master-0 kubenswrapper[8244]: I0318 09:55:16.789045 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert\") pod \"olm-operator-5c9796789-hc74k\" (UID: \"db52ca42-e458-407f-9eeb-bf6de6405edc\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k" Mar 18 09:55:16.789638 master-0 kubenswrapper[8244]: I0318 09:55:16.789384 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert\") pod \"catalog-operator-68f85b4d6c-fhz5s\" (UID: \"ee376320-9ca0-444d-ab37-9cbcb6729b11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s" Mar 18 09:55:16.790257 master-0 kubenswrapper[8244]: I0318 09:55:16.790143 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-8kx9m\" (UID: \"f69a00b6-d908-4485-bb0d-57594fc01d24\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m" Mar 18 09:55:16.790257 master-0 kubenswrapper[8244]: I0318 09:55:16.790223 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-r8fkv\" (UID: \"d4d2218c-f9df-4d43-8727-ed3a920e23f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv" Mar 18 09:55:16.790508 master-0 kubenswrapper[8244]: I0318 09:55:16.790480 8244 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-hkzr2\" (UID: \"ca4a0040-a638-46fa-a1cb-a19d83a7ebe4\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2" Mar 18 09:55:16.790725 master-0 kubenswrapper[8244]: I0318 09:55:16.790704 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv" Mar 18 09:55:16.790814 master-0 kubenswrapper[8244]: I0318 09:55:16.790780 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-2glpv\" (UID: \"6f266bad-8b30-4300-ad93-9d48e61f2440\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" Mar 18 09:55:16.791122 master-0 kubenswrapper[8244]: I0318 09:55:16.791101 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k" Mar 18 09:55:16.792195 master-0 kubenswrapper[8244]: I0318 09:55:16.791318 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m" Mar 18 09:55:16.792195 master-0 kubenswrapper[8244]: I0318 09:55:16.792134 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s" Mar 18 09:55:16.801163 master-0 kubenswrapper[8244]: I0318 09:55:16.800974 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 09:55:16.801163 master-0 kubenswrapper[8244]: I0318 09:55:16.801003 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2" Mar 18 09:55:17.092214 master-0 kubenswrapper[8244]: I0318 09:55:17.091876 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" Mar 18 09:55:17.239848 master-0 kubenswrapper[8244]: I0318 09:55:17.239155 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q"] Mar 18 09:55:17.290751 master-0 kubenswrapper[8244]: I0318 09:55:17.290660 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw"] Mar 18 09:55:17.355975 master-0 kubenswrapper[8244]: I0318 09:55:17.342423 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2g9q\" (UniqueName: \"kubernetes.io/projected/b6948f93-b573-4f09-b754-aaa2269e2875-kube-api-access-t2g9q\") pod \"operator-controller-controller-manager-57777556ff-77n8q\" (UID: \"b6948f93-b573-4f09-b754-aaa2269e2875\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q" Mar 18 09:55:17.363966 master-0 kubenswrapper[8244]: W0318 09:55:17.359020 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0876d14e_1fbe_4c09_b4eb_e3d2eb14ab3a.slice/crio-00431ec658bea7a97a4c1df198c67f87ad4685fb77cc89ae90150ff213743316 WatchSource:0}: Error finding container 00431ec658bea7a97a4c1df198c67f87ad4685fb77cc89ae90150ff213743316: Status 404 returned error can't find the container with id 00431ec658bea7a97a4c1df198c67f87ad4685fb77cc89ae90150ff213743316 Mar 18 09:55:17.380835 master-0 
kubenswrapper[8244]: I0318 09:55:17.380622 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q" Mar 18 09:55:17.530477 master-0 kubenswrapper[8244]: I0318 09:55:17.530403 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" podStartSLOduration=3.530373545 podStartE2EDuration="3.530373545s" podCreationTimestamp="2026-03-18 09:55:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:55:17.374027994 +0000 UTC m=+33.853764122" watchObservedRunningTime="2026-03-18 09:55:17.530373545 +0000 UTC m=+34.010109673" Mar 18 09:55:17.532493 master-0 kubenswrapper[8244]: I0318 09:55:17.532455 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-tbxt4"] Mar 18 09:55:17.532975 master-0 kubenswrapper[8244]: I0318 09:55:17.532948 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv"] Mar 18 09:55:17.547126 master-0 kubenswrapper[8244]: I0318 09:55:17.546496 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2"] Mar 18 09:55:17.566859 master-0 kubenswrapper[8244]: W0318 09:55:17.566806 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca4a0040_a638_46fa_a1cb_a19d83a7ebe4.slice/crio-09d710db13d778dbf9177c53bdd0bf416b054e571b3f82d139455ca7c45869a9 WatchSource:0}: Error finding container 09d710db13d778dbf9177c53bdd0bf416b054e571b3f82d139455ca7c45869a9: Status 404 returned error can't find the container with id 09d710db13d778dbf9177c53bdd0bf416b054e571b3f82d139455ca7c45869a9 Mar 18 09:55:17.585160 master-0 
kubenswrapper[8244]: I0318 09:55:17.584673 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-89ccd998f-2glpv"]
Mar 18 09:55:17.656397 master-0 kubenswrapper[8244]: I0318 09:55:17.656080 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k"]
Mar 18 09:55:17.656456 master-0 kubenswrapper[8244]: I0318 09:55:17.656415 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s"]
Mar 18 09:55:17.666592 master-0 kubenswrapper[8244]: W0318 09:55:17.666527 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee376320_9ca0_444d_ab37_9cbcb6729b11.slice/crio-860dad91b3226c9023c3b60395b0ad953648fc93c4b425a376a5054813858ced WatchSource:0}: Error finding container 860dad91b3226c9023c3b60395b0ad953648fc93c4b425a376a5054813858ced: Status 404 returned error can't find the container with id 860dad91b3226c9023c3b60395b0ad953648fc93c4b425a376a5054813858ced
Mar 18 09:55:17.673530 master-0 kubenswrapper[8244]: I0318 09:55:17.673278 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m"]
Mar 18 09:55:17.691214 master-0 kubenswrapper[8244]: I0318 09:55:17.682057 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q"]
Mar 18 09:55:17.982168 master-0 kubenswrapper[8244]: I0318 09:55:17.981938 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 18 09:55:17.982498 master-0 kubenswrapper[8244]: I0318 09:55:17.982480 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Mar 18 09:55:18.004502 master-0 kubenswrapper[8244]: I0318 09:55:18.004442 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 18 09:55:18.117807 master-0 kubenswrapper[8244]: I0318 09:55:18.117746 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8c4bc848-8103-45a9-acfd-59bc686bea98-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"8c4bc848-8103-45a9-acfd-59bc686bea98\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 18 09:55:18.117980 master-0 kubenswrapper[8244]: I0318 09:55:18.117835 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8c4bc848-8103-45a9-acfd-59bc686bea98-kube-api-access\") pod \"installer-2-master-0\" (UID: \"8c4bc848-8103-45a9-acfd-59bc686bea98\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 18 09:55:18.117980 master-0 kubenswrapper[8244]: I0318 09:55:18.117883 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8c4bc848-8103-45a9-acfd-59bc686bea98-var-lock\") pod \"installer-2-master-0\" (UID: \"8c4bc848-8103-45a9-acfd-59bc686bea98\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 18 09:55:18.218815 master-0 kubenswrapper[8244]: I0318 09:55:18.218765 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8c4bc848-8103-45a9-acfd-59bc686bea98-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"8c4bc848-8103-45a9-acfd-59bc686bea98\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 18 09:55:18.219049 master-0 kubenswrapper[8244]: I0318 09:55:18.218842 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8c4bc848-8103-45a9-acfd-59bc686bea98-kube-api-access\") pod \"installer-2-master-0\" (UID: \"8c4bc848-8103-45a9-acfd-59bc686bea98\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 18 09:55:18.219049 master-0 kubenswrapper[8244]: I0318 09:55:18.218882 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8c4bc848-8103-45a9-acfd-59bc686bea98-var-lock\") pod \"installer-2-master-0\" (UID: \"8c4bc848-8103-45a9-acfd-59bc686bea98\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 18 09:55:18.219049 master-0 kubenswrapper[8244]: I0318 09:55:18.218995 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8c4bc848-8103-45a9-acfd-59bc686bea98-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"8c4bc848-8103-45a9-acfd-59bc686bea98\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 18 09:55:18.219197 master-0 kubenswrapper[8244]: I0318 09:55:18.219159 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8c4bc848-8103-45a9-acfd-59bc686bea98-var-lock\") pod \"installer-2-master-0\" (UID: \"8c4bc848-8103-45a9-acfd-59bc686bea98\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 18 09:55:18.377694 master-0 kubenswrapper[8244]: I0318 09:55:18.377649 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tbxt4" event={"ID":"0442ec6c-5973-40a5-a0c3-dc02de46d343","Type":"ContainerStarted","Data":"9a9d18e78a09ff29603fbd5fc9e03f2d3a2eb3c0cb4954994f17a7962e1ccc72"}
Mar 18 09:55:18.378981 master-0 kubenswrapper[8244]: I0318 09:55:18.378943 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s" event={"ID":"ee376320-9ca0-444d-ab37-9cbcb6729b11","Type":"ContainerStarted","Data":"860dad91b3226c9023c3b60395b0ad953648fc93c4b425a376a5054813858ced"}
Mar 18 09:55:18.380514 master-0 kubenswrapper[8244]: I0318 09:55:18.380456 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" event={"ID":"6f266bad-8b30-4300-ad93-9d48e61f2440","Type":"ContainerStarted","Data":"2cf9d5a318f253e886267d57345deb8cc4469309552817e3d629697b159e40e7"}
Mar 18 09:55:18.382146 master-0 kubenswrapper[8244]: I0318 09:55:18.382078 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k" event={"ID":"db52ca42-e458-407f-9eeb-bf6de6405edc","Type":"ContainerStarted","Data":"61f6b81b92e4d6e8441e143173fb9e75d890f0b6176d5db04fc0f47c9e7e489a"}
Mar 18 09:55:18.384142 master-0 kubenswrapper[8244]: I0318 09:55:18.384116 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv" event={"ID":"d4d2218c-f9df-4d43-8727-ed3a920e23f7","Type":"ContainerStarted","Data":"3a53157cb4f5ca523490699f8170c5e269104888d22a770d6e8d25d868db3675"}
Mar 18 09:55:18.384211 master-0 kubenswrapper[8244]: I0318 09:55:18.384150 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv" event={"ID":"d4d2218c-f9df-4d43-8727-ed3a920e23f7","Type":"ContainerStarted","Data":"2108f9b19bef72325cf7ce6838f94c4d93335d1acb2849349c2da5bf81571c7d"}
Mar 18 09:55:18.385665 master-0 kubenswrapper[8244]: I0318 09:55:18.385635 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m" event={"ID":"f69a00b6-d908-4485-bb0d-57594fc01d24","Type":"ContainerStarted","Data":"1d7c06dbc8e2f887f2a21bc3e179a21693ddc1835812120917fd3ac94d4f0ff2"}
Mar 18 09:55:18.387299 master-0 kubenswrapper[8244]: I0318 09:55:18.387257 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2" event={"ID":"ca4a0040-a638-46fa-a1cb-a19d83a7ebe4","Type":"ContainerStarted","Data":"09d710db13d778dbf9177c53bdd0bf416b054e571b3f82d139455ca7c45869a9"}
Mar 18 09:55:18.388707 master-0 kubenswrapper[8244]: I0318 09:55:18.388669 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q" event={"ID":"b6948f93-b573-4f09-b754-aaa2269e2875","Type":"ContainerStarted","Data":"7a73a7304ad52748de231e8de0dd60f0f62a95ba031328669ed0ac946a01de35"}
Mar 18 09:55:18.388761 master-0 kubenswrapper[8244]: I0318 09:55:18.388708 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q" event={"ID":"b6948f93-b573-4f09-b754-aaa2269e2875","Type":"ContainerStarted","Data":"6b37b06bafa3fe7617d0c4d370f2bc9e1e4e31111091703de1b10d8a3711bfba"}
Mar 18 09:55:18.390776 master-0 kubenswrapper[8244]: I0318 09:55:18.390751 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" event={"ID":"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a","Type":"ContainerStarted","Data":"89f9d8c31d719734af3431b3cec84aa03bf298440dd062c3328c469e4d1b49bb"}
Mar 18 09:55:18.390842 master-0 kubenswrapper[8244]: I0318 09:55:18.390778 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" event={"ID":"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a","Type":"ContainerStarted","Data":"9a21ae84dd6f6fc61983ad5121c7f803c35901cfcabaa09c77339386251dbb3c"}
Mar 18 09:55:18.390842 master-0 kubenswrapper[8244]: I0318 09:55:18.390788 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" event={"ID":"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a","Type":"ContainerStarted","Data":"00431ec658bea7a97a4c1df198c67f87ad4685fb77cc89ae90150ff213743316"}
Mar 18 09:55:18.390981 master-0 kubenswrapper[8244]: I0318 09:55:18.390963 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw"
Mar 18 09:55:18.577112 master-0 kubenswrapper[8244]: I0318 09:55:18.575558 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" podStartSLOduration=3.575520406 podStartE2EDuration="3.575520406s" podCreationTimestamp="2026-03-18 09:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:55:18.575035944 +0000 UTC m=+35.054772072" watchObservedRunningTime="2026-03-18 09:55:18.575520406 +0000 UTC m=+35.055256564"
Mar 18 09:55:18.608511 master-0 kubenswrapper[8244]: I0318 09:55:18.608430 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8c4bc848-8103-45a9-acfd-59bc686bea98-kube-api-access\") pod \"installer-2-master-0\" (UID: \"8c4bc848-8103-45a9-acfd-59bc686bea98\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 18 09:55:18.615530 master-0 kubenswrapper[8244]: I0318 09:55:18.613998 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Mar 18 09:55:19.179879 master-0 kubenswrapper[8244]: I0318 09:55:19.179691 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 18 09:55:19.409872 master-0 kubenswrapper[8244]: I0318 09:55:19.409811 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q" event={"ID":"b6948f93-b573-4f09-b754-aaa2269e2875","Type":"ContainerStarted","Data":"689ec244ea71c8da5c6e5904e94219a9475163e217c6adeb1a47361e24bc7c3d"}
Mar 18 09:55:19.409872 master-0 kubenswrapper[8244]: I0318 09:55:19.409877 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q"
Mar 18 09:55:19.423900 master-0 kubenswrapper[8244]: I0318 09:55:19.423450 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q" podStartSLOduration=4.423429023 podStartE2EDuration="4.423429023s" podCreationTimestamp="2026-03-18 09:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:55:19.421776713 +0000 UTC m=+35.901512861" watchObservedRunningTime="2026-03-18 09:55:19.423429023 +0000 UTC m=+35.903165151"
Mar 18 09:55:19.765416 master-0 kubenswrapper[8244]: I0318 09:55:19.763707 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr"]
Mar 18 09:55:19.765416 master-0 kubenswrapper[8244]: I0318 09:55:19.763967 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr" podUID="15f8941b-dba2-40ba-86d5-3318f5b635cc" containerName="cluster-version-operator" containerID="cri-o://3240a480121627439aed1343343e4db9fb31cb5c32e8ae0ecc6751df89afe086" gracePeriod=130
Mar 18 09:55:20.582070 master-0 kubenswrapper[8244]: I0318 09:55:20.582017 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"]
Mar 18 09:55:20.582558 master-0 kubenswrapper[8244]: I0318 09:55:20.582528 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Mar 18 09:55:20.586168 master-0 kubenswrapper[8244]: I0318 09:55:20.586139 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 18 09:55:20.611673 master-0 kubenswrapper[8244]: I0318 09:55:20.611620 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"]
Mar 18 09:55:20.660787 master-0 kubenswrapper[8244]: I0318 09:55:20.660719 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5fb70bf3-93cd-4000-be1a-8e21846d5709-var-lock\") pod \"installer-1-master-0\" (UID: \"5fb70bf3-93cd-4000-be1a-8e21846d5709\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 18 09:55:20.660787 master-0 kubenswrapper[8244]: I0318 09:55:20.660770 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5fb70bf3-93cd-4000-be1a-8e21846d5709-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"5fb70bf3-93cd-4000-be1a-8e21846d5709\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 18 09:55:20.660787 master-0 kubenswrapper[8244]: I0318 09:55:20.660788 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5fb70bf3-93cd-4000-be1a-8e21846d5709-kube-api-access\") pod \"installer-1-master-0\" (UID: \"5fb70bf3-93cd-4000-be1a-8e21846d5709\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 18 09:55:20.762635 master-0 kubenswrapper[8244]: I0318 09:55:20.762592 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5fb70bf3-93cd-4000-be1a-8e21846d5709-var-lock\") pod \"installer-1-master-0\" (UID: \"5fb70bf3-93cd-4000-be1a-8e21846d5709\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 18 09:55:20.762635 master-0 kubenswrapper[8244]: I0318 09:55:20.762641 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5fb70bf3-93cd-4000-be1a-8e21846d5709-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"5fb70bf3-93cd-4000-be1a-8e21846d5709\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 18 09:55:20.762848 master-0 kubenswrapper[8244]: I0318 09:55:20.762658 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5fb70bf3-93cd-4000-be1a-8e21846d5709-kube-api-access\") pod \"installer-1-master-0\" (UID: \"5fb70bf3-93cd-4000-be1a-8e21846d5709\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 18 09:55:20.762848 master-0 kubenswrapper[8244]: I0318 09:55:20.762791 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5fb70bf3-93cd-4000-be1a-8e21846d5709-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"5fb70bf3-93cd-4000-be1a-8e21846d5709\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 18 09:55:20.764071 master-0 kubenswrapper[8244]: I0318 09:55:20.763191 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5fb70bf3-93cd-4000-be1a-8e21846d5709-var-lock\") pod \"installer-1-master-0\" (UID: \"5fb70bf3-93cd-4000-be1a-8e21846d5709\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 18 09:55:20.793710 master-0 kubenswrapper[8244]: I0318 09:55:20.793649 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5fb70bf3-93cd-4000-be1a-8e21846d5709-kube-api-access\") pod \"installer-1-master-0\" (UID: \"5fb70bf3-93cd-4000-be1a-8e21846d5709\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 18 09:55:20.906072 master-0 kubenswrapper[8244]: I0318 09:55:20.905904 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Mar 18 09:55:21.420415 master-0 kubenswrapper[8244]: I0318 09:55:21.420331 8244 generic.go:334] "Generic (PLEG): container finished" podID="15f8941b-dba2-40ba-86d5-3318f5b635cc" containerID="3240a480121627439aed1343343e4db9fb31cb5c32e8ae0ecc6751df89afe086" exitCode=0
Mar 18 09:55:21.420415 master-0 kubenswrapper[8244]: I0318 09:55:21.420384 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr" event={"ID":"15f8941b-dba2-40ba-86d5-3318f5b635cc","Type":"ContainerDied","Data":"3240a480121627439aed1343343e4db9fb31cb5c32e8ae0ecc6751df89afe086"}
Mar 18 09:55:21.904778 master-0 kubenswrapper[8244]: I0318 09:55:21.904570 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7667954bb7-xws4c"]
Mar 18 09:55:21.905997 master-0 kubenswrapper[8244]: I0318 09:55:21.904866 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7667954bb7-xws4c" podUID="84630bf5-3d03-48ec-9b0c-34034f6181d4" containerName="controller-manager" containerID="cri-o://e1957d98f39301dce1b4014ac7221a19d0b47e1c246671e7f43baf1f91c41c68" gracePeriod=30
Mar 18 09:55:21.926890 master-0 kubenswrapper[8244]: I0318 09:55:21.926042 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-664bd974c9-7w9f6"]
Mar 18 09:55:22.038788 master-0 kubenswrapper[8244]: I0318 09:55:22.037449 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Mar 18 09:55:22.038788 master-0 kubenswrapper[8244]: I0318 09:55:22.038007 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 18 09:55:22.040627 master-0 kubenswrapper[8244]: I0318 09:55:22.040592 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Mar 18 09:55:22.046846 master-0 kubenswrapper[8244]: I0318 09:55:22.046179 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Mar 18 09:55:22.180594 master-0 kubenswrapper[8244]: I0318 09:55:22.180476 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00-var-lock\") pod \"installer-1-master-0\" (UID: \"1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 18 09:55:22.180594 master-0 kubenswrapper[8244]: I0318 09:55:22.180558 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 18 09:55:22.180852 master-0 kubenswrapper[8244]: I0318 09:55:22.180607 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00-kube-api-access\") pod \"installer-1-master-0\" (UID: \"1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 18 09:55:22.281738 master-0 kubenswrapper[8244]: I0318 09:55:22.281655 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00-kube-api-access\") pod \"installer-1-master-0\" (UID: \"1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 18 09:55:22.282064 master-0 kubenswrapper[8244]: I0318 09:55:22.281750 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00-var-lock\") pod \"installer-1-master-0\" (UID: \"1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 18 09:55:22.282064 master-0 kubenswrapper[8244]: I0318 09:55:22.281802 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 18 09:55:22.282064 master-0 kubenswrapper[8244]: I0318 09:55:22.281901 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 18 09:55:22.282177 master-0 kubenswrapper[8244]: I0318 09:55:22.282125 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00-var-lock\") pod \"installer-1-master-0\" (UID: \"1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 18 09:55:22.299584 master-0 kubenswrapper[8244]: I0318 09:55:22.299535 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00-kube-api-access\") pod \"installer-1-master-0\" (UID: \"1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 18 09:55:22.365749 master-0 kubenswrapper[8244]: I0318 09:55:22.365491 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 18 09:55:23.383689 master-0 kubenswrapper[8244]: I0318 09:55:23.378978 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 18 09:55:25.441888 master-0 kubenswrapper[8244]: I0318 09:55:25.441754 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"8c4bc848-8103-45a9-acfd-59bc686bea98","Type":"ContainerStarted","Data":"70e61c4f746140f1e0f601d95aa3dc6fc0eab6a01893679e1752c83571364168"}
Mar 18 09:55:25.580240 master-0 kubenswrapper[8244]: I0318 09:55:25.580153 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 18 09:55:25.580946 master-0 kubenswrapper[8244]: I0318 09:55:25.580909 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 09:55:25.587536 master-0 kubenswrapper[8244]: I0318 09:55:25.587481 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 18 09:55:25.618727 master-0 kubenswrapper[8244]: I0318 09:55:25.618517 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef9dd029-9f8c-4f55-806b-e08ecd088607-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"ef9dd029-9f8c-4f55-806b-e08ecd088607\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 09:55:25.618727 master-0 kubenswrapper[8244]: I0318 09:55:25.618590 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ef9dd029-9f8c-4f55-806b-e08ecd088607-var-lock\") pod \"installer-3-master-0\" (UID: \"ef9dd029-9f8c-4f55-806b-e08ecd088607\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 09:55:25.618727 master-0 kubenswrapper[8244]: I0318 09:55:25.618626 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef9dd029-9f8c-4f55-806b-e08ecd088607-kube-api-access\") pod \"installer-3-master-0\" (UID: \"ef9dd029-9f8c-4f55-806b-e08ecd088607\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 09:55:25.719886 master-0 kubenswrapper[8244]: I0318 09:55:25.719769 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef9dd029-9f8c-4f55-806b-e08ecd088607-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"ef9dd029-9f8c-4f55-806b-e08ecd088607\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 09:55:25.719886 master-0 kubenswrapper[8244]: I0318 09:55:25.719813 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ef9dd029-9f8c-4f55-806b-e08ecd088607-var-lock\") pod \"installer-3-master-0\" (UID: \"ef9dd029-9f8c-4f55-806b-e08ecd088607\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 09:55:25.719886 master-0 kubenswrapper[8244]: I0318 09:55:25.719855 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef9dd029-9f8c-4f55-806b-e08ecd088607-kube-api-access\") pod \"installer-3-master-0\" (UID: \"ef9dd029-9f8c-4f55-806b-e08ecd088607\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 09:55:25.720094 master-0 kubenswrapper[8244]: I0318 09:55:25.719952 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef9dd029-9f8c-4f55-806b-e08ecd088607-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"ef9dd029-9f8c-4f55-806b-e08ecd088607\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 09:55:25.720094 master-0 kubenswrapper[8244]: I0318 09:55:25.720036 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ef9dd029-9f8c-4f55-806b-e08ecd088607-var-lock\") pod \"installer-3-master-0\" (UID: \"ef9dd029-9f8c-4f55-806b-e08ecd088607\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 09:55:25.735780 master-0 kubenswrapper[8244]: I0318 09:55:25.735752 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef9dd029-9f8c-4f55-806b-e08ecd088607-kube-api-access\") pod \"installer-3-master-0\" (UID: \"ef9dd029-9f8c-4f55-806b-e08ecd088607\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 09:55:25.906579 master-0 kubenswrapper[8244]: I0318 09:55:25.906191 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 09:55:26.018630 master-0 kubenswrapper[8244]: I0318 09:55:26.018505 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw"
Mar 18 09:55:27.384626 master-0 kubenswrapper[8244]: I0318 09:55:27.384567 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q"
Mar 18 09:55:27.656290 master-0 kubenswrapper[8244]: I0318 09:55:27.656119 8244 patch_prober.go:28] interesting pod/controller-manager-7667954bb7-xws4c container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.36:8443/healthz\": dial tcp 10.128.0.36:8443: connect: connection refused" start-of-body=
Mar 18 09:55:27.656290 master-0 kubenswrapper[8244]: I0318 09:55:27.656193 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7667954bb7-xws4c" podUID="84630bf5-3d03-48ec-9b0c-34034f6181d4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.36:8443/healthz\": dial tcp 10.128.0.36:8443: connect: connection refused"
Mar 18 09:55:31.485726 master-0 kubenswrapper[8244]: I0318 09:55:31.485651 8244 generic.go:334] "Generic (PLEG): container finished" podID="84630bf5-3d03-48ec-9b0c-34034f6181d4" containerID="e1957d98f39301dce1b4014ac7221a19d0b47e1c246671e7f43baf1f91c41c68" exitCode=0
Mar 18 09:55:31.485726 master-0 kubenswrapper[8244]: I0318 09:55:31.485706 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7667954bb7-xws4c" event={"ID":"84630bf5-3d03-48ec-9b0c-34034f6181d4","Type":"ContainerDied","Data":"e1957d98f39301dce1b4014ac7221a19d0b47e1c246671e7f43baf1f91c41c68"}
Mar 18 09:55:31.814079 master-0 kubenswrapper[8244]: I0318 09:55:31.813845 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr"
Mar 18 09:55:31.904385 master-0 kubenswrapper[8244]: I0318 09:55:31.904273 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15f8941b-dba2-40ba-86d5-3318f5b635cc-kube-api-access\") pod \"15f8941b-dba2-40ba-86d5-3318f5b635cc\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") "
Mar 18 09:55:31.904385 master-0 kubenswrapper[8244]: I0318 09:55:31.904325 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/15f8941b-dba2-40ba-86d5-3318f5b635cc-etc-ssl-certs\") pod \"15f8941b-dba2-40ba-86d5-3318f5b635cc\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") "
Mar 18 09:55:31.904385 master-0 kubenswrapper[8244]: I0318 09:55:31.904432 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert\") pod \"15f8941b-dba2-40ba-86d5-3318f5b635cc\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") "
Mar 18 09:55:31.904385 master-0 kubenswrapper[8244]: I0318 09:55:31.904452 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/15f8941b-dba2-40ba-86d5-3318f5b635cc-service-ca\") pod \"15f8941b-dba2-40ba-86d5-3318f5b635cc\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") "
Mar 18 09:55:31.904385 master-0 kubenswrapper[8244]: I0318 09:55:31.904497 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/15f8941b-dba2-40ba-86d5-3318f5b635cc-etc-cvo-updatepayloads\") pod \"15f8941b-dba2-40ba-86d5-3318f5b635cc\" (UID: \"15f8941b-dba2-40ba-86d5-3318f5b635cc\") "
Mar 18 09:55:31.905640 master-0 kubenswrapper[8244]: I0318 09:55:31.904780 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15f8941b-dba2-40ba-86d5-3318f5b635cc-etc-cvo-updatepayloads" (OuterVolumeSpecName: "etc-cvo-updatepayloads") pod "15f8941b-dba2-40ba-86d5-3318f5b635cc" (UID: "15f8941b-dba2-40ba-86d5-3318f5b635cc"). InnerVolumeSpecName "etc-cvo-updatepayloads". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:55:31.906140 master-0 kubenswrapper[8244]: I0318 09:55:31.905964 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15f8941b-dba2-40ba-86d5-3318f5b635cc-etc-ssl-certs" (OuterVolumeSpecName: "etc-ssl-certs") pod "15f8941b-dba2-40ba-86d5-3318f5b635cc" (UID: "15f8941b-dba2-40ba-86d5-3318f5b635cc"). InnerVolumeSpecName "etc-ssl-certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:55:31.907346 master-0 kubenswrapper[8244]: I0318 09:55:31.907289 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15f8941b-dba2-40ba-86d5-3318f5b635cc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "15f8941b-dba2-40ba-86d5-3318f5b635cc" (UID: "15f8941b-dba2-40ba-86d5-3318f5b635cc"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 09:55:31.907965 master-0 kubenswrapper[8244]: I0318 09:55:31.907806 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15f8941b-dba2-40ba-86d5-3318f5b635cc-service-ca" (OuterVolumeSpecName: "service-ca") pod "15f8941b-dba2-40ba-86d5-3318f5b635cc" (UID: "15f8941b-dba2-40ba-86d5-3318f5b635cc"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:55:31.910418 master-0 kubenswrapper[8244]: I0318 09:55:31.910357 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "15f8941b-dba2-40ba-86d5-3318f5b635cc" (UID: "15f8941b-dba2-40ba-86d5-3318f5b635cc"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 09:55:31.926902 master-0 kubenswrapper[8244]: I0318 09:55:31.926369 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7667954bb7-xws4c"
Mar 18 09:55:32.005867 master-0 kubenswrapper[8244]: I0318 09:55:32.005800 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stt4j\" (UniqueName: \"kubernetes.io/projected/84630bf5-3d03-48ec-9b0c-34034f6181d4-kube-api-access-stt4j\") pod \"84630bf5-3d03-48ec-9b0c-34034f6181d4\" (UID: \"84630bf5-3d03-48ec-9b0c-34034f6181d4\") "
Mar 18 09:55:32.005970 master-0 kubenswrapper[8244]: I0318 09:55:32.005873 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84630bf5-3d03-48ec-9b0c-34034f6181d4-client-ca\") pod \"84630bf5-3d03-48ec-9b0c-34034f6181d4\" (UID: \"84630bf5-3d03-48ec-9b0c-34034f6181d4\") "
Mar 18 09:55:32.005970 master-0 kubenswrapper[8244]: I0318 09:55:32.005926 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84630bf5-3d03-48ec-9b0c-34034f6181d4-config\") pod \"84630bf5-3d03-48ec-9b0c-34034f6181d4\" (UID: \"84630bf5-3d03-48ec-9b0c-34034f6181d4\") "
Mar 18 09:55:32.005970 master-0 kubenswrapper[8244]: I0318 09:55:32.005946 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84630bf5-3d03-48ec-9b0c-34034f6181d4-proxy-ca-bundles\") pod \"84630bf5-3d03-48ec-9b0c-34034f6181d4\" (UID: \"84630bf5-3d03-48ec-9b0c-34034f6181d4\") "
Mar 18 09:55:32.006097 master-0 kubenswrapper[8244]: I0318 09:55:32.005994 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84630bf5-3d03-48ec-9b0c-34034f6181d4-serving-cert\") pod \"84630bf5-3d03-48ec-9b0c-34034f6181d4\" (UID: \"84630bf5-3d03-48ec-9b0c-34034f6181d4\") "
Mar 18 09:55:32.006218 master-0 kubenswrapper[8244]: I0318 09:55:32.006185 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15f8941b-dba2-40ba-86d5-3318f5b635cc-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 09:55:32.006218 master-0 kubenswrapper[8244]: I0318 09:55:32.006203 8244 reconciler_common.go:293] "Volume detached for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/15f8941b-dba2-40ba-86d5-3318f5b635cc-etc-ssl-certs\") on node \"master-0\" DevicePath \"\""
Mar 18 09:55:32.006218 master-0 kubenswrapper[8244]: I0318 09:55:32.006213 8244 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/15f8941b-dba2-40ba-86d5-3318f5b635cc-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 09:55:32.006218 master-0 kubenswrapper[8244]: I0318 09:55:32.006222 8244 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15f8941b-dba2-40ba-86d5-3318f5b635cc-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 09:55:32.007903 master-0 kubenswrapper[8244]: I0318 09:55:32.006231 8244 reconciler_common.go:293] "Volume detached for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/15f8941b-dba2-40ba-86d5-3318f5b635cc-etc-cvo-updatepayloads\") on node \"master-0\" DevicePath \"\""
Mar 18 09:55:32.007903
master-0 kubenswrapper[8244]: I0318 09:55:32.007228 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84630bf5-3d03-48ec-9b0c-34034f6181d4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "84630bf5-3d03-48ec-9b0c-34034f6181d4" (UID: "84630bf5-3d03-48ec-9b0c-34034f6181d4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:55:32.007903 master-0 kubenswrapper[8244]: I0318 09:55:32.007372 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84630bf5-3d03-48ec-9b0c-34034f6181d4-config" (OuterVolumeSpecName: "config") pod "84630bf5-3d03-48ec-9b0c-34034f6181d4" (UID: "84630bf5-3d03-48ec-9b0c-34034f6181d4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:55:32.007903 master-0 kubenswrapper[8244]: I0318 09:55:32.007746 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84630bf5-3d03-48ec-9b0c-34034f6181d4-client-ca" (OuterVolumeSpecName: "client-ca") pod "84630bf5-3d03-48ec-9b0c-34034f6181d4" (UID: "84630bf5-3d03-48ec-9b0c-34034f6181d4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:55:32.015037 master-0 kubenswrapper[8244]: I0318 09:55:32.011085 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84630bf5-3d03-48ec-9b0c-34034f6181d4-kube-api-access-stt4j" (OuterVolumeSpecName: "kube-api-access-stt4j") pod "84630bf5-3d03-48ec-9b0c-34034f6181d4" (UID: "84630bf5-3d03-48ec-9b0c-34034f6181d4"). InnerVolumeSpecName "kube-api-access-stt4j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:55:32.020601 master-0 kubenswrapper[8244]: I0318 09:55:32.020550 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84630bf5-3d03-48ec-9b0c-34034f6181d4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "84630bf5-3d03-48ec-9b0c-34034f6181d4" (UID: "84630bf5-3d03-48ec-9b0c-34034f6181d4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:55:32.107379 master-0 kubenswrapper[8244]: I0318 09:55:32.107306 8244 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84630bf5-3d03-48ec-9b0c-34034f6181d4-config\") on node \"master-0\" DevicePath \"\"" Mar 18 09:55:32.107379 master-0 kubenswrapper[8244]: I0318 09:55:32.107352 8244 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84630bf5-3d03-48ec-9b0c-34034f6181d4-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 18 09:55:32.107379 master-0 kubenswrapper[8244]: I0318 09:55:32.107373 8244 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84630bf5-3d03-48ec-9b0c-34034f6181d4-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 09:55:32.107510 master-0 kubenswrapper[8244]: I0318 09:55:32.107390 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stt4j\" (UniqueName: \"kubernetes.io/projected/84630bf5-3d03-48ec-9b0c-34034f6181d4-kube-api-access-stt4j\") on node \"master-0\" DevicePath \"\"" Mar 18 09:55:32.107510 master-0 kubenswrapper[8244]: I0318 09:55:32.107406 8244 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84630bf5-3d03-48ec-9b0c-34034f6181d4-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 09:55:32.491988 master-0 kubenswrapper[8244]: I0318 09:55:32.491948 8244 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m" event={"ID":"f69a00b6-d908-4485-bb0d-57594fc01d24","Type":"ContainerStarted","Data":"1075c84bd38c3fce27905c1156a2ab4cc251fe93c9a2162f48deb544d115915f"} Mar 18 09:55:32.493642 master-0 kubenswrapper[8244]: I0318 09:55:32.493587 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr" event={"ID":"15f8941b-dba2-40ba-86d5-3318f5b635cc","Type":"ContainerDied","Data":"fec9f6ce2363bfece5842c76139bc154b3ddbc4bb405022d03bffec1a7a4ae73"} Mar 18 09:55:32.493714 master-0 kubenswrapper[8244]: I0318 09:55:32.493650 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr" Mar 18 09:55:32.493792 master-0 kubenswrapper[8244]: I0318 09:55:32.493662 8244 scope.go:117] "RemoveContainer" containerID="3240a480121627439aed1343343e4db9fb31cb5c32e8ae0ecc6751df89afe086" Mar 18 09:55:32.496949 master-0 kubenswrapper[8244]: I0318 09:55:32.496870 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7667954bb7-xws4c" event={"ID":"84630bf5-3d03-48ec-9b0c-34034f6181d4","Type":"ContainerDied","Data":"9ee500d397f055af42d968375a2a159609fc36101c1c2b378af379b040c4ff55"} Mar 18 09:55:32.497013 master-0 kubenswrapper[8244]: I0318 09:55:32.496959 8244 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7667954bb7-xws4c" Mar 18 09:55:32.526128 master-0 kubenswrapper[8244]: I0318 09:55:32.517582 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 18 09:55:32.533175 master-0 kubenswrapper[8244]: I0318 09:55:32.529196 8244 scope.go:117] "RemoveContainer" containerID="e1957d98f39301dce1b4014ac7221a19d0b47e1c246671e7f43baf1f91c41c68" Mar 18 09:55:32.733297 master-0 kubenswrapper[8244]: I0318 09:55:32.732790 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 18 09:55:32.733297 master-0 kubenswrapper[8244]: I0318 09:55:32.732866 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 18 09:55:33.077901 master-0 kubenswrapper[8244]: I0318 09:55:33.073122 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7667954bb7-xws4c"] Mar 18 09:55:33.437768 master-0 kubenswrapper[8244]: I0318 09:55:33.437610 8244 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7667954bb7-xws4c"] Mar 18 09:55:33.511739 master-0 kubenswrapper[8244]: I0318 09:55:33.511548 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00","Type":"ContainerStarted","Data":"933b2ad053b9c23c3a2342880b67f40c11f8fa3992eedba2b2625d8844c5e60c"} Mar 18 09:55:33.511739 master-0 kubenswrapper[8244]: I0318 09:55:33.511647 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00","Type":"ContainerStarted","Data":"03e0c8a2298260aa3a63483fbc7bfb57b4d0366e456b6f98e512ee9a034418aa"} Mar 18 09:55:33.517652 master-0 kubenswrapper[8244]: I0318 
09:55:33.517491 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k" event={"ID":"db52ca42-e458-407f-9eeb-bf6de6405edc","Type":"ContainerStarted","Data":"7bc07367dd052649b8080488f5642d3b8b2459ecca751c0ccf8436cc35e93048"} Mar 18 09:55:33.518260 master-0 kubenswrapper[8244]: I0318 09:55:33.518143 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k" Mar 18 09:55:33.522690 master-0 kubenswrapper[8244]: I0318 09:55:33.522629 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"5fb70bf3-93cd-4000-be1a-8e21846d5709","Type":"ContainerStarted","Data":"22a0f37f7177929cbf4f5043d36e78b2ea4f84b8562060ced4185a407eb57943"} Mar 18 09:55:33.522690 master-0 kubenswrapper[8244]: I0318 09:55:33.522675 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"5fb70bf3-93cd-4000-be1a-8e21846d5709","Type":"ContainerStarted","Data":"1e692e8ac748487a3686bf48bba0af89ab5710b4a4e9840c96ef2c14535ec26e"} Mar 18 09:55:33.524627 master-0 kubenswrapper[8244]: I0318 09:55:33.524580 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k" Mar 18 09:55:33.527028 master-0 kubenswrapper[8244]: I0318 09:55:33.526894 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z9sf5" event={"ID":"da04c6fa-4916-4bed-a6b2-cc92bf2ee379","Type":"ContainerStarted","Data":"96b902b262e59884b99f1a3c34f6487d733afb78de980d478a4eb56175d2a610"} Mar 18 09:55:33.527028 master-0 kubenswrapper[8244]: I0318 09:55:33.526937 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z9sf5" 
event={"ID":"da04c6fa-4916-4bed-a6b2-cc92bf2ee379","Type":"ContainerStarted","Data":"9e0adfc587f2973b98bb81ebc6a8994d7c59a5ef12b69d74a2d4a5707b49a2c8"} Mar 18 09:55:33.527465 master-0 kubenswrapper[8244]: I0318 09:55:33.527317 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-z9sf5" Mar 18 09:55:33.539048 master-0 kubenswrapper[8244]: I0318 09:55:33.538251 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2" event={"ID":"ca4a0040-a638-46fa-a1cb-a19d83a7ebe4","Type":"ContainerStarted","Data":"37124343fb8209ca549ff671c560cfcd2f841cdc0b622af9f05faea1d0440b44"} Mar 18 09:55:33.539048 master-0 kubenswrapper[8244]: I0318 09:55:33.538315 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2" event={"ID":"ca4a0040-a638-46fa-a1cb-a19d83a7ebe4","Type":"ContainerStarted","Data":"d329cbff3f93c0797d55bbc4989994ef6bde775d852d69c46ec0c0eadff97f83"} Mar 18 09:55:33.540808 master-0 kubenswrapper[8244]: I0318 09:55:33.540089 8244 generic.go:334] "Generic (PLEG): container finished" podID="8b906fc0-f2bf-4586-97e6-921bbd467b65" containerID="ca2bd4c098fa7a5b008bdac56aadab357bb0951ab5e2ff2f404990c8c28ed3a8" exitCode=0 Mar 18 09:55:33.540808 master-0 kubenswrapper[8244]: I0318 09:55:33.540162 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" event={"ID":"8b906fc0-f2bf-4586-97e6-921bbd467b65","Type":"ContainerDied","Data":"ca2bd4c098fa7a5b008bdac56aadab357bb0951ab5e2ff2f404990c8c28ed3a8"} Mar 18 09:55:33.547936 master-0 kubenswrapper[8244]: I0318 09:55:33.546613 8244 generic.go:334] "Generic (PLEG): container finished" podID="0c7b317c-d141-4e69-9c82-4a5dda6c3248" containerID="2f45eb55b88d94206ed5a68b6e7edfd43cd25729bac030b2a8ee190f8b3e4b8f" exitCode=0 Mar 18 09:55:33.547936 master-0 kubenswrapper[8244]: I0318 09:55:33.546729 8244 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" event={"ID":"0c7b317c-d141-4e69-9c82-4a5dda6c3248","Type":"ContainerDied","Data":"2f45eb55b88d94206ed5a68b6e7edfd43cd25729bac030b2a8ee190f8b3e4b8f"} Mar 18 09:55:33.559499 master-0 kubenswrapper[8244]: I0318 09:55:33.558339 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" event={"ID":"6f266bad-8b30-4300-ad93-9d48e61f2440","Type":"ContainerStarted","Data":"fb1e06109c9333d787d8e6b957a55759794e573da59639d9f2a8746b35212fab"} Mar 18 09:55:33.559499 master-0 kubenswrapper[8244]: I0318 09:55:33.559444 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" Mar 18 09:55:33.571609 master-0 kubenswrapper[8244]: I0318 09:55:33.571525 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"ef9dd029-9f8c-4f55-806b-e08ecd088607","Type":"ContainerStarted","Data":"354bc8af8c44a8efe3d6f13fc31abc79fcefb28d3a122046caeb3cb9b5eae2f2"} Mar 18 09:55:33.571609 master-0 kubenswrapper[8244]: I0318 09:55:33.571596 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"ef9dd029-9f8c-4f55-806b-e08ecd088607","Type":"ContainerStarted","Data":"dcb91b69dcbf9d3f889dabaaabd1985969376253eac4aef42776025c49f17438"} Mar 18 09:55:33.582593 master-0 kubenswrapper[8244]: I0318 09:55:33.582376 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr"] Mar 18 09:55:33.592332 master-0 kubenswrapper[8244]: I0318 09:55:33.592298 8244 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-version/cluster-version-operator-56d8475767-c2qzr"] Mar 18 09:55:33.596871 master-0 kubenswrapper[8244]: I0318 09:55:33.596137 8244 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s" event={"ID":"ee376320-9ca0-444d-ab37-9cbcb6729b11","Type":"ContainerStarted","Data":"6410a8c11f0437a2879f1434cfaaf9d03f57e9770536169360a5b016573b78a5"} Mar 18 09:55:33.596871 master-0 kubenswrapper[8244]: I0318 09:55:33.596525 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s" Mar 18 09:55:33.598220 master-0 kubenswrapper[8244]: I0318 09:55:33.597793 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" Mar 18 09:55:33.612845 master-0 kubenswrapper[8244]: I0318 09:55:33.609266 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_8c4bc848-8103-45a9-acfd-59bc686bea98/installer/0.log" Mar 18 09:55:33.612845 master-0 kubenswrapper[8244]: I0318 09:55:33.609327 8244 generic.go:334] "Generic (PLEG): container finished" podID="8c4bc848-8103-45a9-acfd-59bc686bea98" containerID="34c5bc2990dbadb86b221d1573828b60db565416ad77ee40a672328a12258e3b" exitCode=1 Mar 18 09:55:33.612845 master-0 kubenswrapper[8244]: I0318 09:55:33.609427 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"8c4bc848-8103-45a9-acfd-59bc686bea98","Type":"ContainerDied","Data":"34c5bc2990dbadb86b221d1573828b60db565416ad77ee40a672328a12258e3b"} Mar 18 09:55:33.612845 master-0 kubenswrapper[8244]: I0318 09:55:33.609514 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s" Mar 18 09:55:33.631304 master-0 kubenswrapper[8244]: I0318 09:55:33.626043 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-664bd974c9-7w9f6" 
event={"ID":"9f251e1b-0f5d-460f-8152-c9201dba0cff","Type":"ContainerStarted","Data":"84fe06a7e1df3bf79a51d0e1fb52267eeeef30fa355dff71934faa8c6935266d"} Mar 18 09:55:33.631304 master-0 kubenswrapper[8244]: I0318 09:55:33.626220 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-664bd974c9-7w9f6" podUID="9f251e1b-0f5d-460f-8152-c9201dba0cff" containerName="route-controller-manager" containerID="cri-o://84fe06a7e1df3bf79a51d0e1fb52267eeeef30fa355dff71934faa8c6935266d" gracePeriod=30 Mar 18 09:55:33.631304 master-0 kubenswrapper[8244]: I0318 09:55:33.626563 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-664bd974c9-7w9f6" Mar 18 09:55:33.640617 master-0 kubenswrapper[8244]: I0318 09:55:33.640575 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-664bd974c9-7w9f6" Mar 18 09:55:33.660052 master-0 kubenswrapper[8244]: I0318 09:55:33.648754 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv" event={"ID":"d4d2218c-f9df-4d43-8727-ed3a920e23f7","Type":"ContainerStarted","Data":"2ad786c56f6dcaf1e2cffec16812c116ea52e84ada296839ebfedd3ef5e41741"} Mar 18 09:55:33.660052 master-0 kubenswrapper[8244]: I0318 09:55:33.649342 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv" Mar 18 09:55:33.660052 master-0 kubenswrapper[8244]: I0318 09:55:33.652948 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tbxt4" event={"ID":"0442ec6c-5973-40a5-a0c3-dc02de46d343","Type":"ContainerStarted","Data":"ef5d35b0bdd83283c0d235d808ced136108e7206c52b37235790a6b7e0aba640"} Mar 18 09:55:33.660052 master-0 
kubenswrapper[8244]: I0318 09:55:33.652976 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tbxt4" event={"ID":"0442ec6c-5973-40a5-a0c3-dc02de46d343","Type":"ContainerStarted","Data":"81f7fce3a095c0cfbf66000345e95ae9b76f32e4b5618679d43a083a19d475f1"} Mar 18 09:55:33.699483 master-0 kubenswrapper[8244]: I0318 09:55:33.699014 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-1-master-0" podStartSLOduration=11.698992397 podStartE2EDuration="11.698992397s" podCreationTimestamp="2026-03-18 09:55:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:55:33.635572942 +0000 UTC m=+50.115309080" watchObservedRunningTime="2026-03-18 09:55:33.698992397 +0000 UTC m=+50.178728535" Mar 18 09:55:33.711196 master-0 kubenswrapper[8244]: I0318 09:55:33.711120 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s"] Mar 18 09:55:33.711367 master-0 kubenswrapper[8244]: E0318 09:55:33.711346 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84630bf5-3d03-48ec-9b0c-34034f6181d4" containerName="controller-manager" Mar 18 09:55:33.711367 master-0 kubenswrapper[8244]: I0318 09:55:33.711361 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="84630bf5-3d03-48ec-9b0c-34034f6181d4" containerName="controller-manager" Mar 18 09:55:33.711418 master-0 kubenswrapper[8244]: E0318 09:55:33.711381 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15f8941b-dba2-40ba-86d5-3318f5b635cc" containerName="cluster-version-operator" Mar 18 09:55:33.711418 master-0 kubenswrapper[8244]: I0318 09:55:33.711390 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="15f8941b-dba2-40ba-86d5-3318f5b635cc" containerName="cluster-version-operator" Mar 18 09:55:33.711672 master-0 
kubenswrapper[8244]: I0318 09:55:33.711490 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="84630bf5-3d03-48ec-9b0c-34034f6181d4" containerName="controller-manager" Mar 18 09:55:33.711672 master-0 kubenswrapper[8244]: I0318 09:55:33.711513 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="15f8941b-dba2-40ba-86d5-3318f5b635cc" containerName="cluster-version-operator" Mar 18 09:55:33.715877 master-0 kubenswrapper[8244]: I0318 09:55:33.715807 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s" Mar 18 09:55:33.730954 master-0 kubenswrapper[8244]: I0318 09:55:33.720636 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 18 09:55:33.730954 master-0 kubenswrapper[8244]: I0318 09:55:33.720921 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 18 09:55:33.730954 master-0 kubenswrapper[8244]: I0318 09:55:33.721270 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 18 09:55:33.752305 master-0 kubenswrapper[8244]: I0318 09:55:33.749944 8244 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15f8941b-dba2-40ba-86d5-3318f5b635cc" path="/var/lib/kubelet/pods/15f8941b-dba2-40ba-86d5-3318f5b635cc/volumes" Mar 18 09:55:33.759004 master-0 kubenswrapper[8244]: I0318 09:55:33.756197 8244 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84630bf5-3d03-48ec-9b0c-34034f6181d4" path="/var/lib/kubelet/pods/84630bf5-3d03-48ec-9b0c-34034f6181d4/volumes" Mar 18 09:55:33.793966 master-0 kubenswrapper[8244]: I0318 09:55:33.793654 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-3-master-0" podStartSLOduration=8.793632154 
podStartE2EDuration="8.793632154s" podCreationTimestamp="2026-03-18 09:55:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:55:33.757379482 +0000 UTC m=+50.237115620" watchObservedRunningTime="2026-03-18 09:55:33.793632154 +0000 UTC m=+50.273368282" Mar 18 09:55:33.839192 master-0 kubenswrapper[8244]: I0318 09:55:33.825752 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/432f611b-a1a2-4cc9-b005-17a16413d281-service-ca\") pod \"cluster-version-operator-7d58488df-9nd2s\" (UID: \"432f611b-a1a2-4cc9-b005-17a16413d281\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s" Mar 18 09:55:33.839192 master-0 kubenswrapper[8244]: I0318 09:55:33.826569 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/432f611b-a1a2-4cc9-b005-17a16413d281-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-9nd2s\" (UID: \"432f611b-a1a2-4cc9-b005-17a16413d281\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s" Mar 18 09:55:33.839192 master-0 kubenswrapper[8244]: I0318 09:55:33.826595 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/432f611b-a1a2-4cc9-b005-17a16413d281-serving-cert\") pod \"cluster-version-operator-7d58488df-9nd2s\" (UID: \"432f611b-a1a2-4cc9-b005-17a16413d281\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s" Mar 18 09:55:33.839192 master-0 kubenswrapper[8244]: I0318 09:55:33.826748 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/432f611b-a1a2-4cc9-b005-17a16413d281-kube-api-access\") pod \"cluster-version-operator-7d58488df-9nd2s\" (UID: \"432f611b-a1a2-4cc9-b005-17a16413d281\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s" Mar 18 09:55:33.839192 master-0 kubenswrapper[8244]: I0318 09:55:33.827407 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/432f611b-a1a2-4cc9-b005-17a16413d281-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-9nd2s\" (UID: \"432f611b-a1a2-4cc9-b005-17a16413d281\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s" Mar 18 09:55:33.914256 master-0 kubenswrapper[8244]: I0318 09:55:33.912072 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-master-0" podStartSLOduration=13.912052423 podStartE2EDuration="13.912052423s" podCreationTimestamp="2026-03-18 09:55:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:55:33.911749365 +0000 UTC m=+50.391485493" watchObservedRunningTime="2026-03-18 09:55:33.912052423 +0000 UTC m=+50.391788551" Mar 18 09:55:33.928185 master-0 kubenswrapper[8244]: I0318 09:55:33.928037 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/432f611b-a1a2-4cc9-b005-17a16413d281-service-ca\") pod \"cluster-version-operator-7d58488df-9nd2s\" (UID: \"432f611b-a1a2-4cc9-b005-17a16413d281\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s" Mar 18 09:55:33.928185 master-0 kubenswrapper[8244]: I0318 09:55:33.928099 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/432f611b-a1a2-4cc9-b005-17a16413d281-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-9nd2s\" (UID: \"432f611b-a1a2-4cc9-b005-17a16413d281\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s" Mar 18 09:55:33.928185 master-0 kubenswrapper[8244]: I0318 09:55:33.928126 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/432f611b-a1a2-4cc9-b005-17a16413d281-serving-cert\") pod \"cluster-version-operator-7d58488df-9nd2s\" (UID: \"432f611b-a1a2-4cc9-b005-17a16413d281\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s" Mar 18 09:55:33.928185 master-0 kubenswrapper[8244]: I0318 09:55:33.928157 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/432f611b-a1a2-4cc9-b005-17a16413d281-kube-api-access\") pod \"cluster-version-operator-7d58488df-9nd2s\" (UID: \"432f611b-a1a2-4cc9-b005-17a16413d281\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s" Mar 18 09:55:33.928185 master-0 kubenswrapper[8244]: I0318 09:55:33.928185 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/432f611b-a1a2-4cc9-b005-17a16413d281-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-9nd2s\" (UID: \"432f611b-a1a2-4cc9-b005-17a16413d281\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s" Mar 18 09:55:33.928581 master-0 kubenswrapper[8244]: I0318 09:55:33.928266 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/432f611b-a1a2-4cc9-b005-17a16413d281-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-9nd2s\" (UID: \"432f611b-a1a2-4cc9-b005-17a16413d281\") " 
pod="openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s" Mar 18 09:55:33.929280 master-0 kubenswrapper[8244]: I0318 09:55:33.929251 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/432f611b-a1a2-4cc9-b005-17a16413d281-service-ca\") pod \"cluster-version-operator-7d58488df-9nd2s\" (UID: \"432f611b-a1a2-4cc9-b005-17a16413d281\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s" Mar 18 09:55:33.929357 master-0 kubenswrapper[8244]: I0318 09:55:33.929318 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/432f611b-a1a2-4cc9-b005-17a16413d281-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-9nd2s\" (UID: \"432f611b-a1a2-4cc9-b005-17a16413d281\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s" Mar 18 09:55:33.940807 master-0 kubenswrapper[8244]: I0318 09:55:33.936752 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/432f611b-a1a2-4cc9-b005-17a16413d281-serving-cert\") pod \"cluster-version-operator-7d58488df-9nd2s\" (UID: \"432f611b-a1a2-4cc9-b005-17a16413d281\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s" Mar 18 09:55:33.946887 master-0 kubenswrapper[8244]: I0318 09:55:33.946738 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-z9sf5" podStartSLOduration=9.127752194 podStartE2EDuration="26.946713496s" podCreationTimestamp="2026-03-18 09:55:07 +0000 UTC" firstStartedPulling="2026-03-18 09:55:14.07315096 +0000 UTC m=+30.552887108" lastFinishedPulling="2026-03-18 09:55:31.892112282 +0000 UTC m=+48.371848410" observedRunningTime="2026-03-18 09:55:33.944396331 +0000 UTC m=+50.424132479" watchObservedRunningTime="2026-03-18 09:55:33.946713496 +0000 UTC m=+50.426449624" Mar 18 
09:55:33.955864 master-0 kubenswrapper[8244]: I0318 09:55:33.955796 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/432f611b-a1a2-4cc9-b005-17a16413d281-kube-api-access\") pod \"cluster-version-operator-7d58488df-9nd2s\" (UID: \"432f611b-a1a2-4cc9-b005-17a16413d281\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s" Mar 18 09:55:33.973300 master-0 kubenswrapper[8244]: I0318 09:55:33.973254 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_8c4bc848-8103-45a9-acfd-59bc686bea98/installer/0.log" Mar 18 09:55:33.973538 master-0 kubenswrapper[8244]: I0318 09:55:33.973331 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 09:55:34.029245 master-0 kubenswrapper[8244]: I0318 09:55:34.014307 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4s2km"] Mar 18 09:55:34.030211 master-0 kubenswrapper[8244]: E0318 09:55:34.030182 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c4bc848-8103-45a9-acfd-59bc686bea98" containerName="installer" Mar 18 09:55:34.030301 master-0 kubenswrapper[8244]: I0318 09:55:34.030289 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c4bc848-8103-45a9-acfd-59bc686bea98" containerName="installer" Mar 18 09:55:34.030476 master-0 kubenswrapper[8244]: I0318 09:55:34.030463 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c4bc848-8103-45a9-acfd-59bc686bea98" containerName="installer" Mar 18 09:55:34.031301 master-0 kubenswrapper[8244]: I0318 09:55:34.031285 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4s2km" Mar 18 09:55:34.074847 master-0 kubenswrapper[8244]: I0318 09:55:34.068442 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4s2km"] Mar 18 09:55:34.075248 master-0 kubenswrapper[8244]: I0318 09:55:34.075213 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s" Mar 18 09:55:34.137847 master-0 kubenswrapper[8244]: I0318 09:55:34.134003 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8c4bc848-8103-45a9-acfd-59bc686bea98-var-lock\") pod \"8c4bc848-8103-45a9-acfd-59bc686bea98\" (UID: \"8c4bc848-8103-45a9-acfd-59bc686bea98\") " Mar 18 09:55:34.137847 master-0 kubenswrapper[8244]: I0318 09:55:34.134125 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8c4bc848-8103-45a9-acfd-59bc686bea98-kubelet-dir\") pod \"8c4bc848-8103-45a9-acfd-59bc686bea98\" (UID: \"8c4bc848-8103-45a9-acfd-59bc686bea98\") " Mar 18 09:55:34.137847 master-0 kubenswrapper[8244]: I0318 09:55:34.134642 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8c4bc848-8103-45a9-acfd-59bc686bea98-kube-api-access\") pod \"8c4bc848-8103-45a9-acfd-59bc686bea98\" (UID: \"8c4bc848-8103-45a9-acfd-59bc686bea98\") " Mar 18 09:55:34.137847 master-0 kubenswrapper[8244]: I0318 09:55:34.134776 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a4c7d0e-10a1-44d1-8874-8e2a76753106-catalog-content\") pod \"redhat-marketplace-4s2km\" (UID: \"2a4c7d0e-10a1-44d1-8874-8e2a76753106\") " pod="openshift-marketplace/redhat-marketplace-4s2km" Mar 
18 09:55:34.137847 master-0 kubenswrapper[8244]: I0318 09:55:34.134843 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a4c7d0e-10a1-44d1-8874-8e2a76753106-utilities\") pod \"redhat-marketplace-4s2km\" (UID: \"2a4c7d0e-10a1-44d1-8874-8e2a76753106\") " pod="openshift-marketplace/redhat-marketplace-4s2km" Mar 18 09:55:34.137847 master-0 kubenswrapper[8244]: I0318 09:55:34.134874 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8zbz\" (UniqueName: \"kubernetes.io/projected/2a4c7d0e-10a1-44d1-8874-8e2a76753106-kube-api-access-k8zbz\") pod \"redhat-marketplace-4s2km\" (UID: \"2a4c7d0e-10a1-44d1-8874-8e2a76753106\") " pod="openshift-marketplace/redhat-marketplace-4s2km" Mar 18 09:55:34.137847 master-0 kubenswrapper[8244]: I0318 09:55:34.134520 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c4bc848-8103-45a9-acfd-59bc686bea98-var-lock" (OuterVolumeSpecName: "var-lock") pod "8c4bc848-8103-45a9-acfd-59bc686bea98" (UID: "8c4bc848-8103-45a9-acfd-59bc686bea98"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:55:34.137847 master-0 kubenswrapper[8244]: I0318 09:55:34.134538 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c4bc848-8103-45a9-acfd-59bc686bea98-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8c4bc848-8103-45a9-acfd-59bc686bea98" (UID: "8c4bc848-8103-45a9-acfd-59bc686bea98"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:55:34.149852 master-0 kubenswrapper[8244]: I0318 09:55:34.146734 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c4bc848-8103-45a9-acfd-59bc686bea98-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8c4bc848-8103-45a9-acfd-59bc686bea98" (UID: "8c4bc848-8103-45a9-acfd-59bc686bea98"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:55:34.158355 master-0 kubenswrapper[8244]: I0318 09:55:34.155702 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 18 09:55:34.241742 master-0 kubenswrapper[8244]: I0318 09:55:34.241690 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a4c7d0e-10a1-44d1-8874-8e2a76753106-utilities\") pod \"redhat-marketplace-4s2km\" (UID: \"2a4c7d0e-10a1-44d1-8874-8e2a76753106\") " pod="openshift-marketplace/redhat-marketplace-4s2km" Mar 18 09:55:34.241742 master-0 kubenswrapper[8244]: I0318 09:55:34.241742 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8zbz\" (UniqueName: \"kubernetes.io/projected/2a4c7d0e-10a1-44d1-8874-8e2a76753106-kube-api-access-k8zbz\") pod \"redhat-marketplace-4s2km\" (UID: \"2a4c7d0e-10a1-44d1-8874-8e2a76753106\") " pod="openshift-marketplace/redhat-marketplace-4s2km" Mar 18 09:55:34.241982 master-0 kubenswrapper[8244]: I0318 09:55:34.241780 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a4c7d0e-10a1-44d1-8874-8e2a76753106-catalog-content\") pod \"redhat-marketplace-4s2km\" (UID: \"2a4c7d0e-10a1-44d1-8874-8e2a76753106\") " pod="openshift-marketplace/redhat-marketplace-4s2km" Mar 18 09:55:34.241982 master-0 kubenswrapper[8244]: I0318 
09:55:34.241841 8244 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8c4bc848-8103-45a9-acfd-59bc686bea98-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:55:34.241982 master-0 kubenswrapper[8244]: I0318 09:55:34.241852 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8c4bc848-8103-45a9-acfd-59bc686bea98-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 09:55:34.241982 master-0 kubenswrapper[8244]: I0318 09:55:34.241861 8244 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8c4bc848-8103-45a9-acfd-59bc686bea98-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 09:55:34.242230 master-0 kubenswrapper[8244]: I0318 09:55:34.242203 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a4c7d0e-10a1-44d1-8874-8e2a76753106-catalog-content\") pod \"redhat-marketplace-4s2km\" (UID: \"2a4c7d0e-10a1-44d1-8874-8e2a76753106\") " pod="openshift-marketplace/redhat-marketplace-4s2km" Mar 18 09:55:34.242647 master-0 kubenswrapper[8244]: I0318 09:55:34.242609 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a4c7d0e-10a1-44d1-8874-8e2a76753106-utilities\") pod \"redhat-marketplace-4s2km\" (UID: \"2a4c7d0e-10a1-44d1-8874-8e2a76753106\") " pod="openshift-marketplace/redhat-marketplace-4s2km" Mar 18 09:55:34.273590 master-0 kubenswrapper[8244]: I0318 09:55:34.265932 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8zbz\" (UniqueName: \"kubernetes.io/projected/2a4c7d0e-10a1-44d1-8874-8e2a76753106-kube-api-access-k8zbz\") pod \"redhat-marketplace-4s2km\" (UID: \"2a4c7d0e-10a1-44d1-8874-8e2a76753106\") " pod="openshift-marketplace/redhat-marketplace-4s2km" Mar 18 09:55:34.273590 
master-0 kubenswrapper[8244]: I0318 09:55:34.269224 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-664bd974c9-7w9f6" Mar 18 09:55:34.346681 master-0 kubenswrapper[8244]: I0318 09:55:34.346109 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-664bd974c9-7w9f6" podStartSLOduration=11.783997891 podStartE2EDuration="29.346091884s" podCreationTimestamp="2026-03-18 09:55:05 +0000 UTC" firstStartedPulling="2026-03-18 09:55:14.338408891 +0000 UTC m=+30.818145019" lastFinishedPulling="2026-03-18 09:55:31.900502874 +0000 UTC m=+48.380239012" observedRunningTime="2026-03-18 09:55:34.34553367 +0000 UTC m=+50.825269798" watchObservedRunningTime="2026-03-18 09:55:34.346091884 +0000 UTC m=+50.825828022" Mar 18 09:55:34.414656 master-0 kubenswrapper[8244]: I0318 09:55:34.414599 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hn6md"] Mar 18 09:55:34.414863 master-0 kubenswrapper[8244]: E0318 09:55:34.414790 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f251e1b-0f5d-460f-8152-c9201dba0cff" containerName="route-controller-manager" Mar 18 09:55:34.414863 master-0 kubenswrapper[8244]: I0318 09:55:34.414801 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f251e1b-0f5d-460f-8152-c9201dba0cff" containerName="route-controller-manager" Mar 18 09:55:34.414918 master-0 kubenswrapper[8244]: I0318 09:55:34.414894 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f251e1b-0f5d-460f-8152-c9201dba0cff" containerName="route-controller-manager" Mar 18 09:55:34.415446 master-0 kubenswrapper[8244]: I0318 09:55:34.415421 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hn6md" Mar 18 09:55:34.425513 master-0 kubenswrapper[8244]: I0318 09:55:34.425463 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hn6md"] Mar 18 09:55:34.451130 master-0 kubenswrapper[8244]: I0318 09:55:34.449993 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f251e1b-0f5d-460f-8152-c9201dba0cff-client-ca\") pod \"9f251e1b-0f5d-460f-8152-c9201dba0cff\" (UID: \"9f251e1b-0f5d-460f-8152-c9201dba0cff\") " Mar 18 09:55:34.451130 master-0 kubenswrapper[8244]: I0318 09:55:34.450054 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26snd\" (UniqueName: \"kubernetes.io/projected/9f251e1b-0f5d-460f-8152-c9201dba0cff-kube-api-access-26snd\") pod \"9f251e1b-0f5d-460f-8152-c9201dba0cff\" (UID: \"9f251e1b-0f5d-460f-8152-c9201dba0cff\") " Mar 18 09:55:34.451130 master-0 kubenswrapper[8244]: I0318 09:55:34.450086 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f251e1b-0f5d-460f-8152-c9201dba0cff-serving-cert\") pod \"9f251e1b-0f5d-460f-8152-c9201dba0cff\" (UID: \"9f251e1b-0f5d-460f-8152-c9201dba0cff\") " Mar 18 09:55:34.451130 master-0 kubenswrapper[8244]: I0318 09:55:34.450111 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f251e1b-0f5d-460f-8152-c9201dba0cff-config\") pod \"9f251e1b-0f5d-460f-8152-c9201dba0cff\" (UID: \"9f251e1b-0f5d-460f-8152-c9201dba0cff\") " Mar 18 09:55:34.451130 master-0 kubenswrapper[8244]: I0318 09:55:34.450204 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af588cc6-5c57-4fea-a8db-84bf34b647a3-utilities\") pod 
\"redhat-operators-hn6md\" (UID: \"af588cc6-5c57-4fea-a8db-84bf34b647a3\") " pod="openshift-marketplace/redhat-operators-hn6md" Mar 18 09:55:34.451130 master-0 kubenswrapper[8244]: I0318 09:55:34.450247 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pws8f\" (UniqueName: \"kubernetes.io/projected/af588cc6-5c57-4fea-a8db-84bf34b647a3-kube-api-access-pws8f\") pod \"redhat-operators-hn6md\" (UID: \"af588cc6-5c57-4fea-a8db-84bf34b647a3\") " pod="openshift-marketplace/redhat-operators-hn6md" Mar 18 09:55:34.451130 master-0 kubenswrapper[8244]: I0318 09:55:34.450424 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af588cc6-5c57-4fea-a8db-84bf34b647a3-catalog-content\") pod \"redhat-operators-hn6md\" (UID: \"af588cc6-5c57-4fea-a8db-84bf34b647a3\") " pod="openshift-marketplace/redhat-operators-hn6md" Mar 18 09:55:34.453536 master-0 kubenswrapper[8244]: I0318 09:55:34.453343 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f251e1b-0f5d-460f-8152-c9201dba0cff-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9f251e1b-0f5d-460f-8152-c9201dba0cff" (UID: "9f251e1b-0f5d-460f-8152-c9201dba0cff"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:55:34.453536 master-0 kubenswrapper[8244]: I0318 09:55:34.453512 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f251e1b-0f5d-460f-8152-c9201dba0cff-config" (OuterVolumeSpecName: "config") pod "9f251e1b-0f5d-460f-8152-c9201dba0cff" (UID: "9f251e1b-0f5d-460f-8152-c9201dba0cff"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:55:34.454107 master-0 kubenswrapper[8244]: I0318 09:55:34.453617 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f251e1b-0f5d-460f-8152-c9201dba0cff-client-ca" (OuterVolumeSpecName: "client-ca") pod "9f251e1b-0f5d-460f-8152-c9201dba0cff" (UID: "9f251e1b-0f5d-460f-8152-c9201dba0cff"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:55:34.455333 master-0 kubenswrapper[8244]: I0318 09:55:34.455197 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f251e1b-0f5d-460f-8152-c9201dba0cff-kube-api-access-26snd" (OuterVolumeSpecName: "kube-api-access-26snd") pod "9f251e1b-0f5d-460f-8152-c9201dba0cff" (UID: "9f251e1b-0f5d-460f-8152-c9201dba0cff"). InnerVolumeSpecName "kube-api-access-26snd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:55:34.493472 master-0 kubenswrapper[8244]: I0318 09:55:34.493368 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4s2km" Mar 18 09:55:34.554025 master-0 kubenswrapper[8244]: I0318 09:55:34.551860 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pws8f\" (UniqueName: \"kubernetes.io/projected/af588cc6-5c57-4fea-a8db-84bf34b647a3-kube-api-access-pws8f\") pod \"redhat-operators-hn6md\" (UID: \"af588cc6-5c57-4fea-a8db-84bf34b647a3\") " pod="openshift-marketplace/redhat-operators-hn6md" Mar 18 09:55:34.554025 master-0 kubenswrapper[8244]: I0318 09:55:34.551978 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af588cc6-5c57-4fea-a8db-84bf34b647a3-catalog-content\") pod \"redhat-operators-hn6md\" (UID: \"af588cc6-5c57-4fea-a8db-84bf34b647a3\") " pod="openshift-marketplace/redhat-operators-hn6md" Mar 18 09:55:34.554025 master-0 kubenswrapper[8244]: I0318 09:55:34.552014 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af588cc6-5c57-4fea-a8db-84bf34b647a3-utilities\") pod \"redhat-operators-hn6md\" (UID: \"af588cc6-5c57-4fea-a8db-84bf34b647a3\") " pod="openshift-marketplace/redhat-operators-hn6md" Mar 18 09:55:34.554025 master-0 kubenswrapper[8244]: I0318 09:55:34.552072 8244 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f251e1b-0f5d-460f-8152-c9201dba0cff-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 09:55:34.554025 master-0 kubenswrapper[8244]: I0318 09:55:34.552085 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26snd\" (UniqueName: \"kubernetes.io/projected/9f251e1b-0f5d-460f-8152-c9201dba0cff-kube-api-access-26snd\") on node \"master-0\" DevicePath \"\"" Mar 18 09:55:34.554025 master-0 kubenswrapper[8244]: I0318 09:55:34.552094 8244 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f251e1b-0f5d-460f-8152-c9201dba0cff-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 09:55:34.554025 master-0 kubenswrapper[8244]: I0318 09:55:34.552104 8244 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f251e1b-0f5d-460f-8152-c9201dba0cff-config\") on node \"master-0\" DevicePath \"\"" Mar 18 09:55:34.554025 master-0 kubenswrapper[8244]: I0318 09:55:34.552511 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af588cc6-5c57-4fea-a8db-84bf34b647a3-catalog-content\") pod \"redhat-operators-hn6md\" (UID: \"af588cc6-5c57-4fea-a8db-84bf34b647a3\") " pod="openshift-marketplace/redhat-operators-hn6md" Mar 18 09:55:34.554025 master-0 kubenswrapper[8244]: I0318 09:55:34.552528 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af588cc6-5c57-4fea-a8db-84bf34b647a3-utilities\") pod \"redhat-operators-hn6md\" (UID: \"af588cc6-5c57-4fea-a8db-84bf34b647a3\") " pod="openshift-marketplace/redhat-operators-hn6md" Mar 18 09:55:34.659052 master-0 kubenswrapper[8244]: I0318 09:55:34.658990 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" event={"ID":"0c7b317c-d141-4e69-9c82-4a5dda6c3248","Type":"ContainerStarted","Data":"3d52698e47a3bd28b884c4f9760e0868d79ab47917931631b8a30df0b79576a9"} Mar 18 09:55:34.660456 master-0 kubenswrapper[8244]: I0318 09:55:34.660427 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_8c4bc848-8103-45a9-acfd-59bc686bea98/installer/0.log" Mar 18 09:55:34.660568 master-0 kubenswrapper[8244]: I0318 09:55:34.660521 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" 
event={"ID":"8c4bc848-8103-45a9-acfd-59bc686bea98","Type":"ContainerDied","Data":"70e61c4f746140f1e0f601d95aa3dc6fc0eab6a01893679e1752c83571364168"} Mar 18 09:55:34.660568 master-0 kubenswrapper[8244]: I0318 09:55:34.660553 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 09:55:34.660654 master-0 kubenswrapper[8244]: I0318 09:55:34.660582 8244 scope.go:117] "RemoveContainer" containerID="34c5bc2990dbadb86b221d1573828b60db565416ad77ee40a672328a12258e3b" Mar 18 09:55:34.664779 master-0 kubenswrapper[8244]: I0318 09:55:34.664739 8244 generic.go:334] "Generic (PLEG): container finished" podID="9f251e1b-0f5d-460f-8152-c9201dba0cff" containerID="84fe06a7e1df3bf79a51d0e1fb52267eeeef30fa355dff71934faa8c6935266d" exitCode=0 Mar 18 09:55:34.665231 master-0 kubenswrapper[8244]: I0318 09:55:34.664803 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-664bd974c9-7w9f6" event={"ID":"9f251e1b-0f5d-460f-8152-c9201dba0cff","Type":"ContainerDied","Data":"84fe06a7e1df3bf79a51d0e1fb52267eeeef30fa355dff71934faa8c6935266d"} Mar 18 09:55:34.665299 master-0 kubenswrapper[8244]: I0318 09:55:34.665247 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-664bd974c9-7w9f6" event={"ID":"9f251e1b-0f5d-460f-8152-c9201dba0cff","Type":"ContainerDied","Data":"b9e56ee47991b15430eb39589af396f99ed92f11ae0487ffd43e56616df53489"} Mar 18 09:55:34.665340 master-0 kubenswrapper[8244]: I0318 09:55:34.665325 8244 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-664bd974c9-7w9f6" Mar 18 09:55:34.675567 master-0 kubenswrapper[8244]: I0318 09:55:34.675535 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" event={"ID":"8b906fc0-f2bf-4586-97e6-921bbd467b65","Type":"ContainerStarted","Data":"0636c36ae635990d984b5c5964a8c0d18855626aa7d5892ced2b17fb2e5644af"} Mar 18 09:55:34.678547 master-0 kubenswrapper[8244]: I0318 09:55:34.678510 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s" event={"ID":"432f611b-a1a2-4cc9-b005-17a16413d281","Type":"ContainerStarted","Data":"fd996d8153064578e39564038db6d922a85643610cafc41bae9a4fe71acf8389"} Mar 18 09:55:34.678685 master-0 kubenswrapper[8244]: I0318 09:55:34.678552 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s" event={"ID":"432f611b-a1a2-4cc9-b005-17a16413d281","Type":"ContainerStarted","Data":"296c63b9a082d2c4952a03261f6f9afd9282d74bb23ca7de387e35c413bd5177"} Mar 18 09:55:34.682298 master-0 kubenswrapper[8244]: I0318 09:55:34.682267 8244 scope.go:117] "RemoveContainer" containerID="84fe06a7e1df3bf79a51d0e1fb52267eeeef30fa355dff71934faa8c6935266d" Mar 18 09:55:34.698192 master-0 kubenswrapper[8244]: I0318 09:55:34.697061 8244 scope.go:117] "RemoveContainer" containerID="84fe06a7e1df3bf79a51d0e1fb52267eeeef30fa355dff71934faa8c6935266d" Mar 18 09:55:34.698192 master-0 kubenswrapper[8244]: E0318 09:55:34.697578 8244 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84fe06a7e1df3bf79a51d0e1fb52267eeeef30fa355dff71934faa8c6935266d\": container with ID starting with 84fe06a7e1df3bf79a51d0e1fb52267eeeef30fa355dff71934faa8c6935266d not found: ID does not exist" 
containerID="84fe06a7e1df3bf79a51d0e1fb52267eeeef30fa355dff71934faa8c6935266d" Mar 18 09:55:34.698192 master-0 kubenswrapper[8244]: I0318 09:55:34.697629 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84fe06a7e1df3bf79a51d0e1fb52267eeeef30fa355dff71934faa8c6935266d"} err="failed to get container status \"84fe06a7e1df3bf79a51d0e1fb52267eeeef30fa355dff71934faa8c6935266d\": rpc error: code = NotFound desc = could not find container \"84fe06a7e1df3bf79a51d0e1fb52267eeeef30fa355dff71934faa8c6935266d\": container with ID starting with 84fe06a7e1df3bf79a51d0e1fb52267eeeef30fa355dff71934faa8c6935266d not found: ID does not exist" Mar 18 09:55:35.278536 master-0 kubenswrapper[8244]: I0318 09:55:35.278377 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7"] Mar 18 09:55:35.279416 master-0 kubenswrapper[8244]: I0318 09:55:35.279372 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7" Mar 18 09:55:35.285239 master-0 kubenswrapper[8244]: I0318 09:55:35.285136 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54cf6885f8-xsgcr"] Mar 18 09:55:35.286554 master-0 kubenswrapper[8244]: I0318 09:55:35.286505 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54cf6885f8-xsgcr" Mar 18 09:55:35.295480 master-0 kubenswrapper[8244]: W0318 09:55:35.295425 8244 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: configmaps "openshift-global-ca" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'master-0' and this object Mar 18 09:55:35.295705 master-0 kubenswrapper[8244]: E0318 09:55:35.295489 8244 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-global-ca\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Mar 18 09:55:35.295907 master-0 kubenswrapper[8244]: I0318 09:55:35.295871 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 18 09:55:35.296214 master-0 kubenswrapper[8244]: I0318 09:55:35.296178 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 18 09:55:35.296431 master-0 kubenswrapper[8244]: W0318 09:55:35.296399 8244 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:master-0" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'master-0' and this object Mar 18 09:55:35.296533 master-0 kubenswrapper[8244]: E0318 09:55:35.296437 8244 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:master-0\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Mar 18 09:55:35.296859 master-0 kubenswrapper[8244]: I0318 09:55:35.296777 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 18 09:55:35.297094 master-0 kubenswrapper[8244]: W0318 09:55:35.297032 8244 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'master-0' and this object Mar 18 09:55:35.297094 master-0 kubenswrapper[8244]: E0318 09:55:35.297072 8244 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Mar 18 09:55:35.297254 master-0 kubenswrapper[8244]: W0318 09:55:35.297148 8244 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'master-0' and this object Mar 18 09:55:35.297254 master-0 kubenswrapper[8244]: E0318 
09:55:35.297172 8244 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Mar 18 09:55:35.298273 master-0 kubenswrapper[8244]: W0318 09:55:35.298236 8244 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'master-0' and this object Mar 18 09:55:35.298409 master-0 kubenswrapper[8244]: E0318 09:55:35.298278 8244 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Mar 18 09:55:35.299778 master-0 kubenswrapper[8244]: I0318 09:55:35.299629 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 18 09:55:35.299905 master-0 kubenswrapper[8244]: I0318 09:55:35.299855 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 09:55:35.305858 master-0 kubenswrapper[8244]: I0318 09:55:35.305793 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 18 09:55:35.367081 master-0 
kubenswrapper[8244]: I0318 09:55:35.367014 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a9c36d0-e3f3-441e-bbab-44703a0eeb70-config\") pod \"route-controller-manager-54cf6885f8-xsgcr\" (UID: \"3a9c36d0-e3f3-441e-bbab-44703a0eeb70\") " pod="openshift-route-controller-manager/route-controller-manager-54cf6885f8-xsgcr"
Mar 18 09:55:35.367081 master-0 kubenswrapper[8244]: I0318 09:55:35.367065 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkm9m\" (UniqueName: \"kubernetes.io/projected/54e26470-5ffb-4673-9375-e80031cc6750-kube-api-access-bkm9m\") pod \"controller-manager-f8f5f6bc4-87dt7\" (UID: \"54e26470-5ffb-4673-9375-e80031cc6750\") " pod="openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7"
Mar 18 09:55:35.367081 master-0 kubenswrapper[8244]: I0318 09:55:35.367082 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a9c36d0-e3f3-441e-bbab-44703a0eeb70-serving-cert\") pod \"route-controller-manager-54cf6885f8-xsgcr\" (UID: \"3a9c36d0-e3f3-441e-bbab-44703a0eeb70\") " pod="openshift-route-controller-manager/route-controller-manager-54cf6885f8-xsgcr"
Mar 18 09:55:35.367378 master-0 kubenswrapper[8244]: I0318 09:55:35.367102 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54e26470-5ffb-4673-9375-e80031cc6750-serving-cert\") pod \"controller-manager-f8f5f6bc4-87dt7\" (UID: \"54e26470-5ffb-4673-9375-e80031cc6750\") " pod="openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7"
Mar 18 09:55:35.367378 master-0 kubenswrapper[8244]: I0318 09:55:35.367138 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a9c36d0-e3f3-441e-bbab-44703a0eeb70-client-ca\") pod \"route-controller-manager-54cf6885f8-xsgcr\" (UID: \"3a9c36d0-e3f3-441e-bbab-44703a0eeb70\") " pod="openshift-route-controller-manager/route-controller-manager-54cf6885f8-xsgcr"
Mar 18 09:55:35.367378 master-0 kubenswrapper[8244]: I0318 09:55:35.367152 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54e26470-5ffb-4673-9375-e80031cc6750-client-ca\") pod \"controller-manager-f8f5f6bc4-87dt7\" (UID: \"54e26470-5ffb-4673-9375-e80031cc6750\") " pod="openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7"
Mar 18 09:55:35.367378 master-0 kubenswrapper[8244]: I0318 09:55:35.367182 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54e26470-5ffb-4673-9375-e80031cc6750-proxy-ca-bundles\") pod \"controller-manager-f8f5f6bc4-87dt7\" (UID: \"54e26470-5ffb-4673-9375-e80031cc6750\") " pod="openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7"
Mar 18 09:55:35.367378 master-0 kubenswrapper[8244]: I0318 09:55:35.367199 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54e26470-5ffb-4673-9375-e80031cc6750-config\") pod \"controller-manager-f8f5f6bc4-87dt7\" (UID: \"54e26470-5ffb-4673-9375-e80031cc6750\") " pod="openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7"
Mar 18 09:55:35.367378 master-0 kubenswrapper[8244]: I0318 09:55:35.367216 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jqfd\" (UniqueName: \"kubernetes.io/projected/3a9c36d0-e3f3-441e-bbab-44703a0eeb70-kube-api-access-6jqfd\") pod \"route-controller-manager-54cf6885f8-xsgcr\" (UID: \"3a9c36d0-e3f3-441e-bbab-44703a0eeb70\") " pod="openshift-route-controller-manager/route-controller-manager-54cf6885f8-xsgcr"
Mar 18 09:55:35.468062 master-0 kubenswrapper[8244]: I0318 09:55:35.467965 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a9c36d0-e3f3-441e-bbab-44703a0eeb70-client-ca\") pod \"route-controller-manager-54cf6885f8-xsgcr\" (UID: \"3a9c36d0-e3f3-441e-bbab-44703a0eeb70\") " pod="openshift-route-controller-manager/route-controller-manager-54cf6885f8-xsgcr"
Mar 18 09:55:35.468062 master-0 kubenswrapper[8244]: I0318 09:55:35.468061 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54e26470-5ffb-4673-9375-e80031cc6750-client-ca\") pod \"controller-manager-f8f5f6bc4-87dt7\" (UID: \"54e26470-5ffb-4673-9375-e80031cc6750\") " pod="openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7"
Mar 18 09:55:35.468575 master-0 kubenswrapper[8244]: I0318 09:55:35.468533 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54e26470-5ffb-4673-9375-e80031cc6750-proxy-ca-bundles\") pod \"controller-manager-f8f5f6bc4-87dt7\" (UID: \"54e26470-5ffb-4673-9375-e80031cc6750\") " pod="openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7"
Mar 18 09:55:35.468733 master-0 kubenswrapper[8244]: I0318 09:55:35.468707 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54e26470-5ffb-4673-9375-e80031cc6750-config\") pod \"controller-manager-f8f5f6bc4-87dt7\" (UID: \"54e26470-5ffb-4673-9375-e80031cc6750\") " pod="openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7"
Mar 18 09:55:35.468940 master-0 kubenswrapper[8244]: I0318 09:55:35.468913 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jqfd\" (UniqueName: \"kubernetes.io/projected/3a9c36d0-e3f3-441e-bbab-44703a0eeb70-kube-api-access-6jqfd\") pod \"route-controller-manager-54cf6885f8-xsgcr\" (UID: \"3a9c36d0-e3f3-441e-bbab-44703a0eeb70\") " pod="openshift-route-controller-manager/route-controller-manager-54cf6885f8-xsgcr"
Mar 18 09:55:35.469198 master-0 kubenswrapper[8244]: I0318 09:55:35.469171 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a9c36d0-e3f3-441e-bbab-44703a0eeb70-config\") pod \"route-controller-manager-54cf6885f8-xsgcr\" (UID: \"3a9c36d0-e3f3-441e-bbab-44703a0eeb70\") " pod="openshift-route-controller-manager/route-controller-manager-54cf6885f8-xsgcr"
Mar 18 09:55:35.469374 master-0 kubenswrapper[8244]: I0318 09:55:35.469345 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkm9m\" (UniqueName: \"kubernetes.io/projected/54e26470-5ffb-4673-9375-e80031cc6750-kube-api-access-bkm9m\") pod \"controller-manager-f8f5f6bc4-87dt7\" (UID: \"54e26470-5ffb-4673-9375-e80031cc6750\") " pod="openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7"
Mar 18 09:55:35.469540 master-0 kubenswrapper[8244]: I0318 09:55:35.469513 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a9c36d0-e3f3-441e-bbab-44703a0eeb70-serving-cert\") pod \"route-controller-manager-54cf6885f8-xsgcr\" (UID: \"3a9c36d0-e3f3-441e-bbab-44703a0eeb70\") " pod="openshift-route-controller-manager/route-controller-manager-54cf6885f8-xsgcr"
Mar 18 09:55:35.469699 master-0 kubenswrapper[8244]: I0318 09:55:35.469675 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54e26470-5ffb-4673-9375-e80031cc6750-serving-cert\") pod \"controller-manager-f8f5f6bc4-87dt7\" (UID: \"54e26470-5ffb-4673-9375-e80031cc6750\") " pod="openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7"
Mar 18 09:55:35.469866 master-0 kubenswrapper[8244]: I0318 09:55:35.469435 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a9c36d0-e3f3-441e-bbab-44703a0eeb70-client-ca\") pod \"route-controller-manager-54cf6885f8-xsgcr\" (UID: \"3a9c36d0-e3f3-441e-bbab-44703a0eeb70\") " pod="openshift-route-controller-manager/route-controller-manager-54cf6885f8-xsgcr"
Mar 18 09:55:35.470056 master-0 kubenswrapper[8244]: I0318 09:55:35.470015 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54e26470-5ffb-4673-9375-e80031cc6750-client-ca\") pod \"controller-manager-f8f5f6bc4-87dt7\" (UID: \"54e26470-5ffb-4673-9375-e80031cc6750\") " pod="openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7"
Mar 18 09:55:35.470791 master-0 kubenswrapper[8244]: I0318 09:55:35.470735 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a9c36d0-e3f3-441e-bbab-44703a0eeb70-config\") pod \"route-controller-manager-54cf6885f8-xsgcr\" (UID: \"3a9c36d0-e3f3-441e-bbab-44703a0eeb70\") " pod="openshift-route-controller-manager/route-controller-manager-54cf6885f8-xsgcr"
Mar 18 09:55:35.472765 master-0 kubenswrapper[8244]: I0318 09:55:35.472717 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a9c36d0-e3f3-441e-bbab-44703a0eeb70-serving-cert\") pod \"route-controller-manager-54cf6885f8-xsgcr\" (UID: \"3a9c36d0-e3f3-441e-bbab-44703a0eeb70\") " pod="openshift-route-controller-manager/route-controller-manager-54cf6885f8-xsgcr"
Mar 18 09:55:35.687039 master-0 kubenswrapper[8244]: I0318 09:55:35.686990 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" event={"ID":"0c7b317c-d141-4e69-9c82-4a5dda6c3248","Type":"ContainerStarted","Data":"1f6a02cceec99c5c0fc089d5dabd1d753700f027c3be6e365ad2d7a5a87ba638"}
Mar 18 09:55:35.692854 master-0 kubenswrapper[8244]: I0318 09:55:35.687944 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-1-master-0" podUID="1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00" containerName="installer" containerID="cri-o://933b2ad053b9c23c3a2342880b67f40c11f8fa3992eedba2b2625d8844c5e60c" gracePeriod=30
Mar 18 09:55:35.751073 master-0 kubenswrapper[8244]: I0318 09:55:35.750986 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4s2km"]
Mar 18 09:55:35.759339 master-0 kubenswrapper[8244]: I0318 09:55:35.758738 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7"]
Mar 18 09:55:35.761054 master-0 kubenswrapper[8244]: I0318 09:55:35.761013 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54cf6885f8-xsgcr"]
Mar 18 09:55:36.195872 master-0 kubenswrapper[8244]: I0318 09:55:36.183226 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 18 09:55:36.255032 master-0 kubenswrapper[8244]: I0318 09:55:36.254845 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pws8f\" (UniqueName: \"kubernetes.io/projected/af588cc6-5c57-4fea-a8db-84bf34b647a3-kube-api-access-pws8f\") pod \"redhat-operators-hn6md\" (UID: \"af588cc6-5c57-4fea-a8db-84bf34b647a3\") " pod="openshift-marketplace/redhat-operators-hn6md"
Mar 18 09:55:36.389188 master-0 kubenswrapper[8244]: I0318 09:55:36.389097 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 18 09:55:36.401678 master-0 kubenswrapper[8244]: I0318 09:55:36.401609 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54e26470-5ffb-4673-9375-e80031cc6750-proxy-ca-bundles\") pod \"controller-manager-f8f5f6bc4-87dt7\" (UID: \"54e26470-5ffb-4673-9375-e80031cc6750\") " pod="openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7"
Mar 18 09:55:36.411400 master-0 kubenswrapper[8244]: I0318 09:55:36.411340 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 18 09:55:36.421663 master-0 kubenswrapper[8244]: I0318 09:55:36.421177 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54e26470-5ffb-4673-9375-e80031cc6750-config\") pod \"controller-manager-f8f5f6bc4-87dt7\" (UID: \"54e26470-5ffb-4673-9375-e80031cc6750\") " pod="openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7"
Mar 18 09:55:36.471082 master-0 kubenswrapper[8244]: E0318 09:55:36.471003 8244 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:55:36.471304 master-0 kubenswrapper[8244]: E0318 09:55:36.471134 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54e26470-5ffb-4673-9375-e80031cc6750-serving-cert podName:54e26470-5ffb-4673-9375-e80031cc6750 nodeName:}" failed. No retries permitted until 2026-03-18 09:55:36.971105472 +0000 UTC m=+53.450841640 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/54e26470-5ffb-4673-9375-e80031cc6750-serving-cert") pod "controller-manager-f8f5f6bc4-87dt7" (UID: "54e26470-5ffb-4673-9375-e80031cc6750") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:55:36.543615 master-0 kubenswrapper[8244]: I0318 09:55:36.543560 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hn6md"
Mar 18 09:55:36.545507 master-0 kubenswrapper[8244]: I0318 09:55:36.545468 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 18 09:55:36.694887 master-0 kubenswrapper[8244]: I0318 09:55:36.693173 8244 generic.go:334] "Generic (PLEG): container finished" podID="2a4c7d0e-10a1-44d1-8874-8e2a76753106" containerID="d6d7d5014b05d887b72ceaed1997514c215d3bab28a9845f661c8f8e5a37c61b" exitCode=0
Mar 18 09:55:36.694887 master-0 kubenswrapper[8244]: I0318 09:55:36.693798 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4s2km" event={"ID":"2a4c7d0e-10a1-44d1-8874-8e2a76753106","Type":"ContainerDied","Data":"d6d7d5014b05d887b72ceaed1997514c215d3bab28a9845f661c8f8e5a37c61b"}
Mar 18 09:55:36.694887 master-0 kubenswrapper[8244]: I0318 09:55:36.693880 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4s2km" event={"ID":"2a4c7d0e-10a1-44d1-8874-8e2a76753106","Type":"ContainerStarted","Data":"08783743f52be89af4082b555c9edcdac7a39fe043de87c8d2e069b82ff73c86"}
Mar 18 09:55:36.724384 master-0 kubenswrapper[8244]: I0318 09:55:36.722700 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 18 09:55:36.914965 master-0 kubenswrapper[8244]: I0318 09:55:36.913966 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 18 09:55:36.914965 master-0 kubenswrapper[8244]: I0318 09:55:36.914183 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-3-master-0" podUID="ef9dd029-9f8c-4f55-806b-e08ecd088607" containerName="installer" containerID="cri-o://354bc8af8c44a8efe3d6f13fc31abc79fcefb28d3a122046caeb3cb9b5eae2f2" gracePeriod=30
Mar 18 09:55:36.932920 master-0 kubenswrapper[8244]: I0318 09:55:36.932872 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkm9m\" (UniqueName: \"kubernetes.io/projected/54e26470-5ffb-4673-9375-e80031cc6750-kube-api-access-bkm9m\") pod \"controller-manager-f8f5f6bc4-87dt7\" (UID: \"54e26470-5ffb-4673-9375-e80031cc6750\") " pod="openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7"
Mar 18 09:55:36.944206 master-0 kubenswrapper[8244]: I0318 09:55:36.944152 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jqfd\" (UniqueName: \"kubernetes.io/projected/3a9c36d0-e3f3-441e-bbab-44703a0eeb70-kube-api-access-6jqfd\") pod \"route-controller-manager-54cf6885f8-xsgcr\" (UID: \"3a9c36d0-e3f3-441e-bbab-44703a0eeb70\") " pod="openshift-route-controller-manager/route-controller-manager-54cf6885f8-xsgcr"
Mar 18 09:55:37.008861 master-0 kubenswrapper[8244]: I0318 09:55:37.007997 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54e26470-5ffb-4673-9375-e80031cc6750-serving-cert\") pod \"controller-manager-f8f5f6bc4-87dt7\" (UID: \"54e26470-5ffb-4673-9375-e80031cc6750\") " pod="openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7"
Mar 18 09:55:37.012004 master-0 kubenswrapper[8244]: I0318 09:55:37.011955 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54e26470-5ffb-4673-9375-e80031cc6750-serving-cert\") pod \"controller-manager-f8f5f6bc4-87dt7\" (UID: \"54e26470-5ffb-4673-9375-e80031cc6750\") " pod="openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7"
Mar 18 09:55:37.109162 master-0 kubenswrapper[8244]: I0318 09:55:37.109094 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7"
Mar 18 09:55:37.121459 master-0 kubenswrapper[8244]: I0318 09:55:37.121391 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54cf6885f8-xsgcr"
Mar 18 09:55:37.275909 master-0 kubenswrapper[8244]: I0318 09:55:37.275192 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 18 09:55:37.699099 master-0 kubenswrapper[8244]: I0318 09:55:37.699055 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_2b86644b-ddbd-4b14-b82d-b7d614f7f81e/installer/0.log"
Mar 18 09:55:37.699099 master-0 kubenswrapper[8244]: I0318 09:55:37.699101 8244 generic.go:334] "Generic (PLEG): container finished" podID="2b86644b-ddbd-4b14-b82d-b7d614f7f81e" containerID="826610ccc7ba64519b97c82e3e527d6dc4e2a131529f71a75f5c480a046f7aa6" exitCode=1
Mar 18 09:55:37.699579 master-0 kubenswrapper[8244]: I0318 09:55:37.699134 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"2b86644b-ddbd-4b14-b82d-b7d614f7f81e","Type":"ContainerDied","Data":"826610ccc7ba64519b97c82e3e527d6dc4e2a131529f71a75f5c480a046f7aa6"}
Mar 18 09:55:37.700622 master-0 kubenswrapper[8244]: I0318 09:55:37.700593 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_ef9dd029-9f8c-4f55-806b-e08ecd088607/installer/0.log"
Mar 18 09:55:37.700667 master-0 kubenswrapper[8244]: I0318 09:55:37.700634 8244 generic.go:334] "Generic (PLEG): container finished" podID="ef9dd029-9f8c-4f55-806b-e08ecd088607" containerID="354bc8af8c44a8efe3d6f13fc31abc79fcefb28d3a122046caeb3cb9b5eae2f2" exitCode=1
Mar 18 09:55:37.700667 master-0 kubenswrapper[8244]: I0318 09:55:37.700655 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"ef9dd029-9f8c-4f55-806b-e08ecd088607","Type":"ContainerDied","Data":"354bc8af8c44a8efe3d6f13fc31abc79fcefb28d3a122046caeb3cb9b5eae2f2"}
Mar 18 09:55:37.848032 master-0 kubenswrapper[8244]: I0318 09:55:37.847935 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"]
Mar 18 09:55:37.849244 master-0 kubenswrapper[8244]: I0318 09:55:37.848918 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 09:55:37.858345 master-0 kubenswrapper[8244]: I0318 09:55:37.857860 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2n6d2"]
Mar 18 09:55:37.859955 master-0 kubenswrapper[8244]: I0318 09:55:37.859909 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2n6d2"
Mar 18 09:55:37.921782 master-0 kubenswrapper[8244]: I0318 09:55:37.921661 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a4d7edd6-7975-468e-adea-138d92ed1be1-kube-api-access\") pod \"installer-2-master-0\" (UID: \"a4d7edd6-7975-468e-adea-138d92ed1be1\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 09:55:37.921782 master-0 kubenswrapper[8244]: I0318 09:55:37.921748 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a4d7edd6-7975-468e-adea-138d92ed1be1-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"a4d7edd6-7975-468e-adea-138d92ed1be1\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 09:55:37.921782 master-0 kubenswrapper[8244]: I0318 09:55:37.921788 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a4d7edd6-7975-468e-adea-138d92ed1be1-var-lock\") pod \"installer-2-master-0\" (UID: \"a4d7edd6-7975-468e-adea-138d92ed1be1\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 09:55:37.921782 master-0 kubenswrapper[8244]: I0318 09:55:37.921810 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/305c97a4-eb1b-4104-b9ba-2603229899b0-utilities\") pod \"certified-operators-2n6d2\" (UID: \"305c97a4-eb1b-4104-b9ba-2603229899b0\") " pod="openshift-marketplace/certified-operators-2n6d2"
Mar 18 09:55:37.922280 master-0 kubenswrapper[8244]: I0318 09:55:37.921870 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6fq6\" (UniqueName: \"kubernetes.io/projected/305c97a4-eb1b-4104-b9ba-2603229899b0-kube-api-access-c6fq6\") pod \"certified-operators-2n6d2\" (UID: \"305c97a4-eb1b-4104-b9ba-2603229899b0\") " pod="openshift-marketplace/certified-operators-2n6d2"
Mar 18 09:55:37.922280 master-0 kubenswrapper[8244]: I0318 09:55:37.921903 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/305c97a4-eb1b-4104-b9ba-2603229899b0-catalog-content\") pod \"certified-operators-2n6d2\" (UID: \"305c97a4-eb1b-4104-b9ba-2603229899b0\") " pod="openshift-marketplace/certified-operators-2n6d2"
Mar 18 09:55:37.938469 master-0 kubenswrapper[8244]: I0318 09:55:37.938382 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln"
Mar 18 09:55:37.940537 master-0 kubenswrapper[8244]: I0318 09:55:37.938545 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln"
Mar 18 09:55:38.026034 master-0 kubenswrapper[8244]: I0318 09:55:38.023103 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a4d7edd6-7975-468e-adea-138d92ed1be1-kube-api-access\") pod \"installer-2-master-0\" (UID: \"a4d7edd6-7975-468e-adea-138d92ed1be1\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 09:55:38.026034 master-0 kubenswrapper[8244]: I0318 09:55:38.023508 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a4d7edd6-7975-468e-adea-138d92ed1be1-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"a4d7edd6-7975-468e-adea-138d92ed1be1\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 09:55:38.026034 master-0 kubenswrapper[8244]: I0318 09:55:38.023620 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a4d7edd6-7975-468e-adea-138d92ed1be1-var-lock\") pod \"installer-2-master-0\" (UID: \"a4d7edd6-7975-468e-adea-138d92ed1be1\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 09:55:38.026034 master-0 kubenswrapper[8244]: I0318 09:55:38.023652 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/305c97a4-eb1b-4104-b9ba-2603229899b0-utilities\") pod \"certified-operators-2n6d2\" (UID: \"305c97a4-eb1b-4104-b9ba-2603229899b0\") " pod="openshift-marketplace/certified-operators-2n6d2"
Mar 18 09:55:38.026034 master-0 kubenswrapper[8244]: I0318 09:55:38.023684 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6fq6\" (UniqueName: \"kubernetes.io/projected/305c97a4-eb1b-4104-b9ba-2603229899b0-kube-api-access-c6fq6\") pod \"certified-operators-2n6d2\" (UID: \"305c97a4-eb1b-4104-b9ba-2603229899b0\") " pod="openshift-marketplace/certified-operators-2n6d2"
Mar 18 09:55:38.026034 master-0 kubenswrapper[8244]: I0318 09:55:38.023759 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/305c97a4-eb1b-4104-b9ba-2603229899b0-catalog-content\") pod \"certified-operators-2n6d2\" (UID: \"305c97a4-eb1b-4104-b9ba-2603229899b0\") " pod="openshift-marketplace/certified-operators-2n6d2"
Mar 18 09:55:38.026034 master-0 kubenswrapper[8244]: I0318 09:55:38.023799 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a4d7edd6-7975-468e-adea-138d92ed1be1-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"a4d7edd6-7975-468e-adea-138d92ed1be1\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 09:55:38.026034 master-0 kubenswrapper[8244]: I0318 09:55:38.024051 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a4d7edd6-7975-468e-adea-138d92ed1be1-var-lock\") pod \"installer-2-master-0\" (UID: \"a4d7edd6-7975-468e-adea-138d92ed1be1\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 09:55:38.026034 master-0 kubenswrapper[8244]: I0318 09:55:38.024105 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/305c97a4-eb1b-4104-b9ba-2603229899b0-utilities\") pod \"certified-operators-2n6d2\" (UID: \"305c97a4-eb1b-4104-b9ba-2603229899b0\") " pod="openshift-marketplace/certified-operators-2n6d2"
Mar 18 09:55:38.026034 master-0 kubenswrapper[8244]: I0318 09:55:38.024176 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/305c97a4-eb1b-4104-b9ba-2603229899b0-catalog-content\") pod \"certified-operators-2n6d2\" (UID: \"305c97a4-eb1b-4104-b9ba-2603229899b0\") " pod="openshift-marketplace/certified-operators-2n6d2"
Mar 18 09:55:38.107291 master-0 kubenswrapper[8244]: I0318 09:55:38.107211 8244 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 18 09:55:38.124156 master-0 kubenswrapper[8244]: W0318 09:55:38.121380 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a9c36d0_e3f3_441e_bbab_44703a0eeb70.slice/crio-5e145a875bcfd693a2d0eada78d480516e66f2586ddfa00ba2cc3fc84918f220 WatchSource:0}: Error finding container 5e145a875bcfd693a2d0eada78d480516e66f2586ddfa00ba2cc3fc84918f220: Status 404 returned error can't find the container with id 5e145a875bcfd693a2d0eada78d480516e66f2586ddfa00ba2cc3fc84918f220
Mar 18 09:55:38.148529 master-0 kubenswrapper[8244]: I0318 09:55:38.148479 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_ef9dd029-9f8c-4f55-806b-e08ecd088607/installer/0.log"
Mar 18 09:55:38.148710 master-0 kubenswrapper[8244]: I0318 09:55:38.148542 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 09:55:38.153792 master-0 kubenswrapper[8244]: I0318 09:55:38.153729 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2n6d2"]
Mar 18 09:55:38.158223 master-0 kubenswrapper[8244]: I0318 09:55:38.158043 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7"]
Mar 18 09:55:38.160126 master-0 kubenswrapper[8244]: I0318 09:55:38.160093 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hn6md"]
Mar 18 09:55:38.161991 master-0 kubenswrapper[8244]: I0318 09:55:38.161960 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54cf6885f8-xsgcr"]
Mar 18 09:55:38.163876 master-0 kubenswrapper[8244]: I0318 09:55:38.163844 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"]
Mar 18 09:55:38.226202 master-0 kubenswrapper[8244]: I0318 09:55:38.226111 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef9dd029-9f8c-4f55-806b-e08ecd088607-kube-api-access\") pod \"ef9dd029-9f8c-4f55-806b-e08ecd088607\" (UID: \"ef9dd029-9f8c-4f55-806b-e08ecd088607\") "
Mar 18 09:55:38.226310 master-0 kubenswrapper[8244]: I0318 09:55:38.226271 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef9dd029-9f8c-4f55-806b-e08ecd088607-kubelet-dir\") pod \"ef9dd029-9f8c-4f55-806b-e08ecd088607\" (UID: \"ef9dd029-9f8c-4f55-806b-e08ecd088607\") "
Mar 18 09:55:38.226362 master-0 kubenswrapper[8244]: I0318 09:55:38.226322 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef9dd029-9f8c-4f55-806b-e08ecd088607-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ef9dd029-9f8c-4f55-806b-e08ecd088607" (UID: "ef9dd029-9f8c-4f55-806b-e08ecd088607"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:55:38.226437 master-0 kubenswrapper[8244]: I0318 09:55:38.226371 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ef9dd029-9f8c-4f55-806b-e08ecd088607-var-lock\") pod \"ef9dd029-9f8c-4f55-806b-e08ecd088607\" (UID: \"ef9dd029-9f8c-4f55-806b-e08ecd088607\") "
Mar 18 09:55:38.226532 master-0 kubenswrapper[8244]: I0318 09:55:38.226508 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef9dd029-9f8c-4f55-806b-e08ecd088607-var-lock" (OuterVolumeSpecName: "var-lock") pod "ef9dd029-9f8c-4f55-806b-e08ecd088607" (UID: "ef9dd029-9f8c-4f55-806b-e08ecd088607"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:55:38.226780 master-0 kubenswrapper[8244]: I0318 09:55:38.226744 8244 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ef9dd029-9f8c-4f55-806b-e08ecd088607-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 09:55:38.226780 master-0 kubenswrapper[8244]: I0318 09:55:38.226772 8244 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef9dd029-9f8c-4f55-806b-e08ecd088607-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 09:55:38.229237 master-0 kubenswrapper[8244]: I0318 09:55:38.229189 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef9dd029-9f8c-4f55-806b-e08ecd088607-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ef9dd029-9f8c-4f55-806b-e08ecd088607" (UID: "ef9dd029-9f8c-4f55-806b-e08ecd088607"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 09:55:38.327837 master-0 kubenswrapper[8244]: I0318 09:55:38.327751 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef9dd029-9f8c-4f55-806b-e08ecd088607-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 09:55:38.656980 master-0 kubenswrapper[8244]: I0318 09:55:38.656936 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_2b86644b-ddbd-4b14-b82d-b7d614f7f81e/installer/0.log"
Mar 18 09:55:38.657133 master-0 kubenswrapper[8244]: I0318 09:55:38.657039 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Mar 18 09:55:38.707057 master-0 kubenswrapper[8244]: I0318 09:55:38.706986 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7" event={"ID":"54e26470-5ffb-4673-9375-e80031cc6750","Type":"ContainerStarted","Data":"bc2b518f5588a6b282272226db84509d9098206fb841d766ca2a81d956bdb25e"}
Mar 18 09:55:38.709634 master-0 kubenswrapper[8244]: I0318 09:55:38.709589 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_2b86644b-ddbd-4b14-b82d-b7d614f7f81e/installer/0.log"
Mar 18 09:55:38.709759 master-0 kubenswrapper[8244]: I0318 09:55:38.709717 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"2b86644b-ddbd-4b14-b82d-b7d614f7f81e","Type":"ContainerDied","Data":"ff35f1dafa8906a2135f2102b22f8fe7a33132cca04a5b8496f6ffb0a27e700f"}
Mar 18 09:55:38.709802 master-0 kubenswrapper[8244]: I0318 09:55:38.709756 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Mar 18 09:55:38.709802 master-0 kubenswrapper[8244]: I0318 09:55:38.709785 8244 scope.go:117] "RemoveContainer" containerID="826610ccc7ba64519b97c82e3e527d6dc4e2a131529f71a75f5c480a046f7aa6"
Mar 18 09:55:38.711857 master-0 kubenswrapper[8244]: I0318 09:55:38.711835 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_ef9dd029-9f8c-4f55-806b-e08ecd088607/installer/0.log"
Mar 18 09:55:38.711938 master-0 kubenswrapper[8244]: I0318 09:55:38.711907 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"ef9dd029-9f8c-4f55-806b-e08ecd088607","Type":"ContainerDied","Data":"dcb91b69dcbf9d3f889dabaaabd1985969376253eac4aef42776025c49f17438"}
Mar 18 09:55:38.712026 master-0 kubenswrapper[8244]: I0318 09:55:38.711992 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 09:55:38.714076 master-0 kubenswrapper[8244]: I0318 09:55:38.714022 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54cf6885f8-xsgcr" event={"ID":"3a9c36d0-e3f3-441e-bbab-44703a0eeb70","Type":"ContainerStarted","Data":"5e145a875bcfd693a2d0eada78d480516e66f2586ddfa00ba2cc3fc84918f220"}
Mar 18 09:55:38.715542 master-0 kubenswrapper[8244]: I0318 09:55:38.715479 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hn6md" event={"ID":"af588cc6-5c57-4fea-a8db-84bf34b647a3","Type":"ContainerStarted","Data":"bcaf8f561f370518d63f5758dd9df59a375ae07c11f13b0cd1da423c7b17de37"}
Mar 18 09:55:38.725913 master-0 kubenswrapper[8244]: I0318 09:55:38.725883 8244 scope.go:117] "RemoveContainer" containerID="354bc8af8c44a8efe3d6f13fc31abc79fcefb28d3a122046caeb3cb9b5eae2f2"
Mar 18 09:55:38.732565 master-0 kubenswrapper[8244]: I0318 09:55:38.732521 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b86644b-ddbd-4b14-b82d-b7d614f7f81e-kube-api-access\") pod \"2b86644b-ddbd-4b14-b82d-b7d614f7f81e\" (UID: \"2b86644b-ddbd-4b14-b82d-b7d614f7f81e\") "
Mar 18 09:55:38.732629 master-0 kubenswrapper[8244]: I0318 09:55:38.732591 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2b86644b-ddbd-4b14-b82d-b7d614f7f81e-var-lock\") pod \"2b86644b-ddbd-4b14-b82d-b7d614f7f81e\" (UID: \"2b86644b-ddbd-4b14-b82d-b7d614f7f81e\") "
Mar 18 09:55:38.732743 master-0 kubenswrapper[8244]: I0318 09:55:38.732702 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2b86644b-ddbd-4b14-b82d-b7d614f7f81e-kubelet-dir\") pod \"2b86644b-ddbd-4b14-b82d-b7d614f7f81e\" (UID: \"2b86644b-ddbd-4b14-b82d-b7d614f7f81e\") "
Mar 18 09:55:38.732965 master-0 kubenswrapper[8244]: I0318 09:55:38.732930 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b86644b-ddbd-4b14-b82d-b7d614f7f81e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2b86644b-ddbd-4b14-b82d-b7d614f7f81e" (UID: "2b86644b-ddbd-4b14-b82d-b7d614f7f81e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:55:38.733017 master-0 kubenswrapper[8244]: I0318 09:55:38.732987 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b86644b-ddbd-4b14-b82d-b7d614f7f81e-var-lock" (OuterVolumeSpecName: "var-lock") pod "2b86644b-ddbd-4b14-b82d-b7d614f7f81e" (UID: "2b86644b-ddbd-4b14-b82d-b7d614f7f81e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:55:38.736414 master-0 kubenswrapper[8244]: I0318 09:55:38.736387 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b86644b-ddbd-4b14-b82d-b7d614f7f81e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2b86644b-ddbd-4b14-b82d-b7d614f7f81e" (UID: "2b86644b-ddbd-4b14-b82d-b7d614f7f81e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 09:55:38.834280 master-0 kubenswrapper[8244]: I0318 09:55:38.834229 8244 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2b86644b-ddbd-4b14-b82d-b7d614f7f81e-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 09:55:38.834469 master-0 kubenswrapper[8244]: I0318 09:55:38.834319 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b86644b-ddbd-4b14-b82d-b7d614f7f81e-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 09:55:38.834469 master-0 kubenswrapper[8244]: I0318 09:55:38.834364 8244 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2b86644b-ddbd-4b14-b82d-b7d614f7f81e-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 09:55:39.031836 master-0 kubenswrapper[8244]: I0318 09:55:39.031768 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln"
Mar 18 09:55:39.721617 master-0 kubenswrapper[8244]: I0318 09:55:39.721558 8244 generic.go:334] "Generic (PLEG): container finished" podID="af588cc6-5c57-4fea-a8db-84bf34b647a3" containerID="8718f426ea7c61f316713bf92f0fe2e4fac0475e6be4073f7d39f66ad5db68f7" exitCode=0
Mar 18 09:55:39.722171 master-0 kubenswrapper[8244]: I0318 09:55:39.721630 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-hn6md" event={"ID":"af588cc6-5c57-4fea-a8db-84bf34b647a3","Type":"ContainerDied","Data":"8718f426ea7c61f316713bf92f0fe2e4fac0475e6be4073f7d39f66ad5db68f7"} Mar 18 09:55:39.723647 master-0 kubenswrapper[8244]: I0318 09:55:39.723545 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7" event={"ID":"54e26470-5ffb-4673-9375-e80031cc6750","Type":"ContainerStarted","Data":"1248d2a0db71d324c2c95a679e324dd57a6ddd00508bb65cb77279b8a3a015b8"} Mar 18 09:55:39.728109 master-0 kubenswrapper[8244]: I0318 09:55:39.728054 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54cf6885f8-xsgcr" event={"ID":"3a9c36d0-e3f3-441e-bbab-44703a0eeb70","Type":"ContainerStarted","Data":"e686ddb757c595904ac6ebc397e0c0f4d654d782c019f30f1e1bf1e5f427b30d"} Mar 18 09:55:39.741129 master-0 kubenswrapper[8244]: I0318 09:55:39.741084 8244 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c4bc848-8103-45a9-acfd-59bc686bea98" path="/var/lib/kubelet/pods/8c4bc848-8103-45a9-acfd-59bc686bea98/volumes" Mar 18 09:55:39.741681 master-0 kubenswrapper[8244]: I0318 09:55:39.741640 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 09:55:40.088085 master-0 kubenswrapper[8244]: I0318 09:55:40.088009 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:40.088331 master-0 kubenswrapper[8244]: I0318 09:55:40.088197 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:55:40.127017 master-0 kubenswrapper[8244]: I0318 09:55:40.118246 8244 patch_prober.go:28] interesting pod/apiserver-687747fbb4-k7dnf container/openshift-apiserver namespace/openshift-apiserver: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 18 09:55:40.127017 master-0 kubenswrapper[8244]: [+]log ok Mar 18 09:55:40.127017 master-0 kubenswrapper[8244]: [+]etcd ok Mar 18 09:55:40.127017 master-0 kubenswrapper[8244]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 18 09:55:40.127017 master-0 kubenswrapper[8244]: [+]poststarthook/generic-apiserver-start-informers ok Mar 18 09:55:40.127017 master-0 kubenswrapper[8244]: [+]poststarthook/max-in-flight-filter ok Mar 18 09:55:40.127017 master-0 kubenswrapper[8244]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 18 09:55:40.127017 master-0 kubenswrapper[8244]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 18 09:55:40.127017 master-0 kubenswrapper[8244]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 18 09:55:40.127017 master-0 kubenswrapper[8244]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Mar 18 09:55:40.127017 master-0 kubenswrapper[8244]: [+]poststarthook/project.openshift.io-projectcache ok Mar 18 09:55:40.127017 master-0 kubenswrapper[8244]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 18 09:55:40.127017 master-0 kubenswrapper[8244]: [+]poststarthook/openshift.io-startinformers ok Mar 18 09:55:40.127017 master-0 kubenswrapper[8244]: [+]poststarthook/openshift.io-restmapperupdater ok Mar 18 09:55:40.127017 master-0 kubenswrapper[8244]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 18 09:55:40.127017 master-0 kubenswrapper[8244]: livez check failed Mar 18 09:55:40.127017 master-0 kubenswrapper[8244]: I0318 09:55:40.118316 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" podUID="0c7b317c-d141-4e69-9c82-4a5dda6c3248" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:55:40.186942 master-0 
kubenswrapper[8244]: I0318 09:55:40.185665 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a4d7edd6-7975-468e-adea-138d92ed1be1-kube-api-access\") pod \"installer-2-master-0\" (UID: \"a4d7edd6-7975-468e-adea-138d92ed1be1\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 09:55:40.188713 master-0 kubenswrapper[8244]: I0318 09:55:40.188677 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6fq6\" (UniqueName: \"kubernetes.io/projected/305c97a4-eb1b-4104-b9ba-2603229899b0-kube-api-access-c6fq6\") pod \"certified-operators-2n6d2\" (UID: \"305c97a4-eb1b-4104-b9ba-2603229899b0\") " pod="openshift-marketplace/certified-operators-2n6d2" Mar 18 09:55:40.289792 master-0 kubenswrapper[8244]: I0318 09:55:40.289726 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 09:55:40.300192 master-0 kubenswrapper[8244]: I0318 09:55:40.300146 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2n6d2" Mar 18 09:55:40.633005 master-0 kubenswrapper[8244]: I0318 09:55:40.632852 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s" podStartSLOduration=7.632813684 podStartE2EDuration="7.632813684s" podCreationTimestamp="2026-03-18 09:55:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:55:40.624433563 +0000 UTC m=+57.104169731" watchObservedRunningTime="2026-03-18 09:55:40.632813684 +0000 UTC m=+57.112549812" Mar 18 09:55:40.636256 master-0 kubenswrapper[8244]: I0318 09:55:40.634380 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 18 09:55:40.636256 master-0 kubenswrapper[8244]: E0318 09:55:40.634554 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b86644b-ddbd-4b14-b82d-b7d614f7f81e" containerName="installer" Mar 18 09:55:40.636256 master-0 kubenswrapper[8244]: I0318 09:55:40.634565 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b86644b-ddbd-4b14-b82d-b7d614f7f81e" containerName="installer" Mar 18 09:55:40.636256 master-0 kubenswrapper[8244]: E0318 09:55:40.634582 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef9dd029-9f8c-4f55-806b-e08ecd088607" containerName="installer" Mar 18 09:55:40.636256 master-0 kubenswrapper[8244]: I0318 09:55:40.634589 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef9dd029-9f8c-4f55-806b-e08ecd088607" containerName="installer" Mar 18 09:55:40.636256 master-0 kubenswrapper[8244]: I0318 09:55:40.634672 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b86644b-ddbd-4b14-b82d-b7d614f7f81e" containerName="installer" Mar 18 09:55:40.636256 master-0 kubenswrapper[8244]: I0318 09:55:40.634686 8244 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="ef9dd029-9f8c-4f55-806b-e08ecd088607" containerName="installer" Mar 18 09:55:40.636256 master-0 kubenswrapper[8244]: I0318 09:55:40.635001 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 09:55:40.642807 master-0 kubenswrapper[8244]: I0318 09:55:40.642471 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 18 09:55:40.646878 master-0 kubenswrapper[8244]: I0318 09:55:40.644686 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 18 09:55:40.668843 master-0 kubenswrapper[8244]: I0318 09:55:40.667535 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54a208d1-afe8-49b5-92e0-e27afb4abc80-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"54a208d1-afe8-49b5-92e0-e27afb4abc80\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 09:55:40.668843 master-0 kubenswrapper[8244]: I0318 09:55:40.667630 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/54a208d1-afe8-49b5-92e0-e27afb4abc80-var-lock\") pod \"installer-4-master-0\" (UID: \"54a208d1-afe8-49b5-92e0-e27afb4abc80\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 09:55:40.668843 master-0 kubenswrapper[8244]: I0318 09:55:40.667690 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54a208d1-afe8-49b5-92e0-e27afb4abc80-kube-api-access\") pod \"installer-4-master-0\" (UID: \"54a208d1-afe8-49b5-92e0-e27afb4abc80\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 09:55:40.696254 master-0 kubenswrapper[8244]: I0318 09:55:40.695898 
8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lx2pt"] Mar 18 09:55:40.702577 master-0 kubenswrapper[8244]: I0318 09:55:40.699942 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" podStartSLOduration=16.122453526 podStartE2EDuration="33.699919879s" podCreationTimestamp="2026-03-18 09:55:07 +0000 UTC" firstStartedPulling="2026-03-18 09:55:14.314946976 +0000 UTC m=+30.794683104" lastFinishedPulling="2026-03-18 09:55:31.892413319 +0000 UTC m=+48.372149457" observedRunningTime="2026-03-18 09:55:40.696256431 +0000 UTC m=+57.175992569" watchObservedRunningTime="2026-03-18 09:55:40.699919879 +0000 UTC m=+57.179656007" Mar 18 09:55:40.708854 master-0 kubenswrapper[8244]: I0318 09:55:40.707335 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lx2pt"] Mar 18 09:55:40.708854 master-0 kubenswrapper[8244]: I0318 09:55:40.707502 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lx2pt" Mar 18 09:55:40.766573 master-0 kubenswrapper[8244]: I0318 09:55:40.766074 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 18 09:55:40.770187 master-0 kubenswrapper[8244]: I0318 09:55:40.769056 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54a208d1-afe8-49b5-92e0-e27afb4abc80-kube-api-access\") pod \"installer-4-master-0\" (UID: \"54a208d1-afe8-49b5-92e0-e27afb4abc80\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 09:55:40.770187 master-0 kubenswrapper[8244]: I0318 09:55:40.769125 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-958k6\" (UniqueName: \"kubernetes.io/projected/3e884f11-9ace-4ef9-930a-05e170d1bfab-kube-api-access-958k6\") pod \"community-operators-lx2pt\" (UID: \"3e884f11-9ace-4ef9-930a-05e170d1bfab\") " pod="openshift-marketplace/community-operators-lx2pt" Mar 18 09:55:40.770187 master-0 kubenswrapper[8244]: I0318 09:55:40.769151 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e884f11-9ace-4ef9-930a-05e170d1bfab-catalog-content\") pod \"community-operators-lx2pt\" (UID: \"3e884f11-9ace-4ef9-930a-05e170d1bfab\") " pod="openshift-marketplace/community-operators-lx2pt" Mar 18 09:55:40.770187 master-0 kubenswrapper[8244]: I0318 09:55:40.769180 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54a208d1-afe8-49b5-92e0-e27afb4abc80-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"54a208d1-afe8-49b5-92e0-e27afb4abc80\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 09:55:40.770187 master-0 kubenswrapper[8244]: I0318 
09:55:40.769199 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/54a208d1-afe8-49b5-92e0-e27afb4abc80-var-lock\") pod \"installer-4-master-0\" (UID: \"54a208d1-afe8-49b5-92e0-e27afb4abc80\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 09:55:40.770187 master-0 kubenswrapper[8244]: I0318 09:55:40.769216 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e884f11-9ace-4ef9-930a-05e170d1bfab-utilities\") pod \"community-operators-lx2pt\" (UID: \"3e884f11-9ace-4ef9-930a-05e170d1bfab\") " pod="openshift-marketplace/community-operators-lx2pt" Mar 18 09:55:40.770187 master-0 kubenswrapper[8244]: I0318 09:55:40.769299 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54a208d1-afe8-49b5-92e0-e27afb4abc80-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"54a208d1-afe8-49b5-92e0-e27afb4abc80\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 09:55:40.770187 master-0 kubenswrapper[8244]: I0318 09:55:40.769330 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/54a208d1-afe8-49b5-92e0-e27afb4abc80-var-lock\") pod \"installer-4-master-0\" (UID: \"54a208d1-afe8-49b5-92e0-e27afb4abc80\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 09:55:40.777942 master-0 kubenswrapper[8244]: I0318 09:55:40.777896 8244 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 18 09:55:40.778036 master-0 kubenswrapper[8244]: I0318 09:55:40.777953 8244 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 18 09:55:40.778178 master-0 kubenswrapper[8244]: E0318 09:55:40.778149 8244 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd" Mar 18 09:55:40.778178 master-0 kubenswrapper[8244]: I0318 09:55:40.778170 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd" Mar 18 09:55:40.778237 master-0 kubenswrapper[8244]: E0318 09:55:40.778189 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl" Mar 18 09:55:40.778237 master-0 kubenswrapper[8244]: I0318 09:55:40.778197 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl" Mar 18 09:55:40.778340 master-0 kubenswrapper[8244]: I0318 09:55:40.778313 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd" Mar 18 09:55:40.778340 master-0 kubenswrapper[8244]: I0318 09:55:40.778337 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl" Mar 18 09:55:40.779254 master-0 kubenswrapper[8244]: I0318 09:55:40.779193 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl" containerID="cri-o://f2607ae8dbefbd514d748ad1e03e092ca2114ce2ce09f7065e579402bdb6a4c5" gracePeriod=30 Mar 18 09:55:40.779305 master-0 kubenswrapper[8244]: I0318 09:55:40.779287 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd" containerID="cri-o://f305e60f069ccf418ecdd9a5248eadee88e9d2be3ca57cfd6181d1ab96140c57" gracePeriod=30 Mar 18 09:55:40.780365 master-0 kubenswrapper[8244]: I0318 09:55:40.780336 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 18 09:55:40.783557 master-0 kubenswrapper[8244]: I0318 09:55:40.783534 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7" Mar 18 09:55:40.783557 master-0 kubenswrapper[8244]: I0318 09:55:40.783557 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-54cf6885f8-xsgcr" Mar 18 09:55:40.817181 master-0 kubenswrapper[8244]: I0318 09:55:40.817116 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7" Mar 18 09:55:40.817181 master-0 kubenswrapper[8244]: I0318 09:55:40.817158 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-54cf6885f8-xsgcr" Mar 18 09:55:40.818245 master-0 kubenswrapper[8244]: I0318 09:55:40.818115 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2n6d2"] Mar 18 09:55:40.871038 master-0 kubenswrapper[8244]: I0318 09:55:40.871002 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e884f11-9ace-4ef9-930a-05e170d1bfab-catalog-content\") pod \"community-operators-lx2pt\" (UID: \"3e884f11-9ace-4ef9-930a-05e170d1bfab\") " pod="openshift-marketplace/community-operators-lx2pt" Mar 18 09:55:40.871162 master-0 kubenswrapper[8244]: I0318 09:55:40.871084 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 09:55:40.871162 master-0 kubenswrapper[8244]: I0318 09:55:40.871145 8244 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 09:55:40.871223 master-0 kubenswrapper[8244]: I0318 09:55:40.871179 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e884f11-9ace-4ef9-930a-05e170d1bfab-utilities\") pod \"community-operators-lx2pt\" (UID: \"3e884f11-9ace-4ef9-930a-05e170d1bfab\") " pod="openshift-marketplace/community-operators-lx2pt" Mar 18 09:55:40.871223 master-0 kubenswrapper[8244]: I0318 09:55:40.871195 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 09:55:40.871305 master-0 kubenswrapper[8244]: I0318 09:55:40.871285 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 09:55:40.871343 master-0 kubenswrapper[8244]: I0318 09:55:40.871315 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 09:55:40.871373 master-0 kubenswrapper[8244]: I0318 09:55:40.871362 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-958k6\" (UniqueName: \"kubernetes.io/projected/3e884f11-9ace-4ef9-930a-05e170d1bfab-kube-api-access-958k6\") pod \"community-operators-lx2pt\" (UID: \"3e884f11-9ace-4ef9-930a-05e170d1bfab\") " pod="openshift-marketplace/community-operators-lx2pt" Mar 18 09:55:40.871402 master-0 kubenswrapper[8244]: I0318 09:55:40.871382 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 09:55:40.871980 master-0 kubenswrapper[8244]: I0318 09:55:40.871957 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e884f11-9ace-4ef9-930a-05e170d1bfab-catalog-content\") pod \"community-operators-lx2pt\" (UID: \"3e884f11-9ace-4ef9-930a-05e170d1bfab\") " pod="openshift-marketplace/community-operators-lx2pt" Mar 18 09:55:40.873949 master-0 kubenswrapper[8244]: I0318 09:55:40.873924 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e884f11-9ace-4ef9-930a-05e170d1bfab-utilities\") pod \"community-operators-lx2pt\" (UID: \"3e884f11-9ace-4ef9-930a-05e170d1bfab\") " pod="openshift-marketplace/community-operators-lx2pt" Mar 18 09:55:40.973089 master-0 kubenswrapper[8244]: I0318 09:55:40.973052 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 09:55:40.973175 master-0 kubenswrapper[8244]: I0318 09:55:40.973106 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 09:55:40.973175 master-0 kubenswrapper[8244]: I0318 09:55:40.973152 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 09:55:40.973239 master-0 kubenswrapper[8244]: I0318 09:55:40.973195 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 09:55:40.973239 master-0 kubenswrapper[8244]: I0318 09:55:40.973223 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 09:55:40.973290 master-0 kubenswrapper[8244]: I0318 09:55:40.973247 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 09:55:40.973365 master-0 kubenswrapper[8244]: I0318 09:55:40.973341 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 09:55:40.973417 
master-0 kubenswrapper[8244]: I0318 09:55:40.973395 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 09:55:40.973449 master-0 kubenswrapper[8244]: I0318 09:55:40.973425 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 09:55:40.973477 master-0 kubenswrapper[8244]: I0318 09:55:40.973451 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 09:55:40.973508 master-0 kubenswrapper[8244]: I0318 09:55:40.973478 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 09:55:40.973535 master-0 kubenswrapper[8244]: I0318 09:55:40.973506 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 09:55:41.787159 master-0 kubenswrapper[8244]: I0318 09:55:41.787111 8244 generic.go:334] "Generic (PLEG): container finished" podID="305c97a4-eb1b-4104-b9ba-2603229899b0" containerID="03e6fba4231fcdda92e3fad96e79a4e5f2aa602c65c22bc627f57140c57092f0" 
exitCode=0 Mar 18 09:55:41.787677 master-0 kubenswrapper[8244]: I0318 09:55:41.787203 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2n6d2" event={"ID":"305c97a4-eb1b-4104-b9ba-2603229899b0","Type":"ContainerDied","Data":"03e6fba4231fcdda92e3fad96e79a4e5f2aa602c65c22bc627f57140c57092f0"} Mar 18 09:55:41.787677 master-0 kubenswrapper[8244]: I0318 09:55:41.787255 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2n6d2" event={"ID":"305c97a4-eb1b-4104-b9ba-2603229899b0","Type":"ContainerStarted","Data":"e851101b44a79cab31320a525983c7e460dfb515d195e81afefdaabb52603f4f"} Mar 18 09:55:41.792089 master-0 kubenswrapper[8244]: I0318 09:55:41.791835 8244 generic.go:334] "Generic (PLEG): container finished" podID="be8bd84c-8035-4bec-a725-b0ae89382c0f" containerID="acbbc72042bd93d1606b83c55c35f1b48dc5dce61f6ad5d66183b045a74dff9a" exitCode=0 Mar 18 09:55:41.792089 master-0 kubenswrapper[8244]: I0318 09:55:41.791963 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"be8bd84c-8035-4bec-a725-b0ae89382c0f","Type":"ContainerDied","Data":"acbbc72042bd93d1606b83c55c35f1b48dc5dce61f6ad5d66183b045a74dff9a"} Mar 18 09:55:41.795752 master-0 kubenswrapper[8244]: I0318 09:55:41.795337 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"a4d7edd6-7975-468e-adea-138d92ed1be1","Type":"ContainerStarted","Data":"3a3c8396e15ffcccb1d7182e3eb6dbd5c5cf86adc58a45d80d2016b54dbad828"} Mar 18 09:55:41.795752 master-0 kubenswrapper[8244]: I0318 09:55:41.795367 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"a4d7edd6-7975-468e-adea-138d92ed1be1","Type":"ContainerStarted","Data":"306e8c3b294ebc0b6118bec332d25f893bead6bde2beb01fbece7b1ede0478ae"} Mar 18 09:55:43.054581 master-0 
kubenswrapper[8244]: I0318 09:55:43.054492 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 18 09:55:43.104526 master-0 kubenswrapper[8244]: I0318 09:55:43.104479 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be8bd84c-8035-4bec-a725-b0ae89382c0f-kubelet-dir\") pod \"be8bd84c-8035-4bec-a725-b0ae89382c0f\" (UID: \"be8bd84c-8035-4bec-a725-b0ae89382c0f\") " Mar 18 09:55:43.104526 master-0 kubenswrapper[8244]: I0318 09:55:43.104533 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be8bd84c-8035-4bec-a725-b0ae89382c0f-kube-api-access\") pod \"be8bd84c-8035-4bec-a725-b0ae89382c0f\" (UID: \"be8bd84c-8035-4bec-a725-b0ae89382c0f\") " Mar 18 09:55:43.104526 master-0 kubenswrapper[8244]: I0318 09:55:43.104624 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/be8bd84c-8035-4bec-a725-b0ae89382c0f-var-lock\") pod \"be8bd84c-8035-4bec-a725-b0ae89382c0f\" (UID: \"be8bd84c-8035-4bec-a725-b0ae89382c0f\") " Mar 18 09:55:43.104526 master-0 kubenswrapper[8244]: I0318 09:55:43.104632 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be8bd84c-8035-4bec-a725-b0ae89382c0f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "be8bd84c-8035-4bec-a725-b0ae89382c0f" (UID: "be8bd84c-8035-4bec-a725-b0ae89382c0f"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:55:43.104526 master-0 kubenswrapper[8244]: I0318 09:55:43.104897 8244 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be8bd84c-8035-4bec-a725-b0ae89382c0f-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:55:43.104526 master-0 kubenswrapper[8244]: I0318 09:55:43.104930 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be8bd84c-8035-4bec-a725-b0ae89382c0f-var-lock" (OuterVolumeSpecName: "var-lock") pod "be8bd84c-8035-4bec-a725-b0ae89382c0f" (UID: "be8bd84c-8035-4bec-a725-b0ae89382c0f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:55:43.107794 master-0 kubenswrapper[8244]: I0318 09:55:43.107753 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be8bd84c-8035-4bec-a725-b0ae89382c0f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "be8bd84c-8035-4bec-a725-b0ae89382c0f" (UID: "be8bd84c-8035-4bec-a725-b0ae89382c0f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:55:43.205567 master-0 kubenswrapper[8244]: I0318 09:55:43.205487 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be8bd84c-8035-4bec-a725-b0ae89382c0f-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 09:55:43.205567 master-0 kubenswrapper[8244]: I0318 09:55:43.205517 8244 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/be8bd84c-8035-4bec-a725-b0ae89382c0f-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 09:55:45.219097 master-0 kubenswrapper[8244]: E0318 09:55:45.218934 8244 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.486s" Mar 18 09:55:45.219742 master-0 kubenswrapper[8244]: I0318 09:55:45.219350 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-z9sf5" Mar 18 09:55:45.222642 master-0 kubenswrapper[8244]: I0318 09:55:45.222605 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"be8bd84c-8035-4bec-a725-b0ae89382c0f","Type":"ContainerDied","Data":"cf4889e117bb83c7e1a1800e9a36e897d1db0934994a8b13923df3be14b35ebb"} Mar 18 09:55:45.222750 master-0 kubenswrapper[8244]: I0318 09:55:45.222646 8244 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf4889e117bb83c7e1a1800e9a36e897d1db0934994a8b13923df3be14b35ebb" Mar 18 09:55:45.222750 master-0 kubenswrapper[8244]: I0318 09:55:45.222721 8244 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 18 09:55:45.352613 master-0 kubenswrapper[8244]: E0318 09:55:45.352567 8244 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podbe8bd84c_8035_4bec_a725_b0ae89382c0f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podbe8bd84c_8035_4bec_a725_b0ae89382c0f.slice/crio-cf4889e117bb83c7e1a1800e9a36e897d1db0934994a8b13923df3be14b35ebb\": RecentStats: unable to find data in memory cache]" Mar 18 09:55:53.921189 master-0 kubenswrapper[8244]: E0318 09:55:53.920963 8244 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 18 09:55:53.922008 master-0 kubenswrapper[8244]: I0318 09:55:53.921962 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 18 09:55:54.221137 master-0 kubenswrapper[8244]: I0318 09:55:54.220779 8244 patch_prober.go:28] interesting pod/apiserver-687747fbb4-k7dnf container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 18 09:55:54.221137 master-0 kubenswrapper[8244]: [+]log ok Mar 18 09:55:54.221137 master-0 kubenswrapper[8244]: [-]etcd failed: reason withheld Mar 18 09:55:54.221137 master-0 kubenswrapper[8244]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 18 09:55:54.221137 master-0 kubenswrapper[8244]: [+]poststarthook/generic-apiserver-start-informers ok Mar 18 09:55:54.221137 master-0 kubenswrapper[8244]: [+]poststarthook/max-in-flight-filter ok Mar 18 09:55:54.221137 master-0 kubenswrapper[8244]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 18 09:55:54.221137 master-0 kubenswrapper[8244]: 
[+]poststarthook/image.openshift.io-apiserver-caches ok Mar 18 09:55:54.221137 master-0 kubenswrapper[8244]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 18 09:55:54.221137 master-0 kubenswrapper[8244]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Mar 18 09:55:54.221137 master-0 kubenswrapper[8244]: [+]poststarthook/project.openshift.io-projectcache ok Mar 18 09:55:54.221137 master-0 kubenswrapper[8244]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 18 09:55:54.221137 master-0 kubenswrapper[8244]: [+]poststarthook/openshift.io-startinformers ok Mar 18 09:55:54.221137 master-0 kubenswrapper[8244]: [+]poststarthook/openshift.io-restmapperupdater ok Mar 18 09:55:54.221137 master-0 kubenswrapper[8244]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 18 09:55:54.221137 master-0 kubenswrapper[8244]: livez check failed Mar 18 09:55:54.221137 master-0 kubenswrapper[8244]: I0318 09:55:54.221066 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" podUID="0c7b317c-d141-4e69-9c82-4a5dda6c3248" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:55:56.204940 master-0 kubenswrapper[8244]: W0318 09:55:56.204884 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24b4ed170d527099878cb5fdd508a2fb.slice/crio-fdfb79bc09b65edb18f673107694684f4e551ee32e4553ed4c9d9d36b7d28e43 WatchSource:0}: Error finding container fdfb79bc09b65edb18f673107694684f4e551ee32e4553ed4c9d9d36b7d28e43: Status 404 returned error can't find the container with id fdfb79bc09b65edb18f673107694684f4e551ee32e4553ed4c9d9d36b7d28e43 Mar 18 09:55:56.270205 master-0 kubenswrapper[8244]: I0318 09:55:56.270157 8244 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" 
containerID="0f4ef82cd98a641ac2372a9202df576de9d16287dc2775cc6c0529b93f52b3e6" exitCode=1 Mar 18 09:55:56.270330 master-0 kubenswrapper[8244]: I0318 09:55:56.270227 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"0f4ef82cd98a641ac2372a9202df576de9d16287dc2775cc6c0529b93f52b3e6"} Mar 18 09:55:56.270696 master-0 kubenswrapper[8244]: I0318 09:55:56.270672 8244 scope.go:117] "RemoveContainer" containerID="0f4ef82cd98a641ac2372a9202df576de9d16287dc2775cc6c0529b93f52b3e6" Mar 18 09:55:56.271898 master-0 kubenswrapper[8244]: I0318 09:55:56.271814 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"fdfb79bc09b65edb18f673107694684f4e551ee32e4553ed4c9d9d36b7d28e43"} Mar 18 09:55:56.273589 master-0 kubenswrapper[8244]: I0318 09:55:56.273556 8244 generic.go:334] "Generic (PLEG): container finished" podID="c83737980b9ee109184b1d78e942cf36" containerID="5230f2c731392582b4c5b7f1d1739dca596269f4bff091decf0daf9fa0a42c23" exitCode=1 Mar 18 09:55:56.273589 master-0 kubenswrapper[8244]: I0318 09:55:56.273585 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerDied","Data":"5230f2c731392582b4c5b7f1d1739dca596269f4bff091decf0daf9fa0a42c23"} Mar 18 09:55:56.278043 master-0 kubenswrapper[8244]: I0318 09:55:56.278012 8244 scope.go:117] "RemoveContainer" containerID="5230f2c731392582b4c5b7f1d1739dca596269f4bff091decf0daf9fa0a42c23" Mar 18 09:55:56.391655 master-0 kubenswrapper[8244]: E0318 09:55:56.391621 8244 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io master-0)" Mar 18 
09:55:56.625450 master-0 kubenswrapper[8244]: E0318 09:55:56.625308 8244 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T09:55:46Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T09:55:46Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T09:55:46Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T09:55:46Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7\\\"],\\\"sizeBytes\\\":1637455533},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a55ec7ec64efd0f595d8084377b7e463a1807829b7617e5d4a9092dcd924c36\\\"],\\\"sizeBytes\\\":1238100502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af0fe0ca926422a6471d5bf22fc0e682c36c24fba05496a3bdfac0b7d3733015\\\"],\\\"sizeBytes\\\":991832673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\\\"],\\\"sizeBytes\\\":943841779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a09f5a3ba4f60cce0145769509bab92553c8075d210af4ac058965d2ae11efa\\\"],\\\"sizeBytes\\\":876160834},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578\\\"],\\\"sizeBytes\\\":862657321},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a9e8da5c6114f062b814936d4db7a47a04d248e160d6bb28ad4e4a081496ee4\\\"],\\\"sizeBytes\\\":772943435},{\\\"names\\\":[\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e1faad2d9167d84e23585c1cea5962301845548043cf09578f943f79ca98016\\\"],\\\"sizeBytes\\\":687949580},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5e12e4dc52214d3ada5ba5106caebe079eac1d9292c2571a5fe83411ce8e900d\\\"],\\\"sizeBytes\\\":683195416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa5e782406f71c048b1ac3a4bf5d1227ff4be81111114083ad4c7a209c6bfb5a\\\"],\\\"sizeBytes\\\":677942383},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec8fd46dfb35ed10e8f98933166f69ce579c2f35b8db03d21e4c34fc544553e4\\\"],\\\"sizeBytes\\\":621648710},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae50e496bd6ae2d27298d997470b7cb0a426eeb8b7e2e9c7187a34cb03993998\\\"],\\\"sizeBytes\\\":589386806},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55\\\"],\\\"sizeBytes\\\":582154903},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982\\\"],\\\"sizeBytes\\\":558211175},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7af9f5c5af9d529840233ef4b519120cc0e3f14c4fe28cc43b0823f2c11d8f89\\\"],\\\"sizeBytes\\\":548752816},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\\\"],\\\"sizeBytes\\\":529326739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b4c9cf268bb7abef7af187cd775d3f74d0bd33626250095428d53b705ee946\\\"],\\\"sizeBytes\\\":528956487},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a\\\"],\\\"sizeBytes\\\":518384969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552d
d719f58b1\\\"],\\\"sizeBytes\\\":517999161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\\\"],\\\"sizeBytes\\\":514984269},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfe394b58ec6195de8b8420e781b7630d85a412b9112d892fea903f92b783427\\\"],\\\"sizeBytes\\\":513221333},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e\\\"],\\\"sizeBytes\\\":512274055},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77fff570657d2fa0bfb709b2c8b6665bae0bf90a2be981d8dbca56c674715098\\\"],\\\"sizeBytes\\\":511227324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adb9f6f2fd701863c7caed747df43f83d3569ba9388cfa33ea7219ac6a606b11\\\"],\\\"sizeBytes\\\":511164375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458\\\"],\\\"sizeBytes\\\":508888171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263\\\"],\\\"sizeBytes\\\":508544745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:313d1d8ca85e65236a59f058a3316c49436dde691b3a3930d5bc5e3b4b8c8a71\\\"],\\\"sizeBytes\\\":507972093},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c527b4e8239a1f4f4e0a851113e7dd633b7dcb9d75b0e7b21c23d26304abcb3\\\"],\\\"sizeBytes\\\":506480167},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302\\\"],\\\"sizeBytes\\\":506395599},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4a034950346bcd4e36e9e2f1343e0cf7a10cf544963f33d09c7eb2a1bfc634\\\"],\\\"sizeBytes\\\":505345991},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\\\"],\\\"sizeBytes\\\":505246690},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252\\\"],\\\"sizeBytes\\\":504625081},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:712d334b7752d95580571059aae2c50e111d879af4fd8ea7cc3dbaf1a8e7dc69\\\"],\\\"sizeBytes\\\":495994673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5ea1ef4e09b673a0c68c8848ca162ab11d9ac373a377daa52dea702ffa3023\\\"],\\\"sizeBytes\\\":495065340},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b\\\"],\\\"sizeBytes\\\":487096305},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a746a87b784ea1caa278fd0e012554f9df520b6fff665ea0bc4c83f487fed113\\\"],\\\"sizeBytes\\\":484450894},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c4d5a681595e428ff4b5083648c13615eed80be9084a3d3fc68a0295079cb12\\\"],\\\"sizeBytes\\\":484187929},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea5c8a93f30e0a4932da5697d22c0da7eda9a7035c0555eb006b6755e62bb2fc\\\"],\\\"sizeBytes\\\":468265024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\\\"],\\\"sizeBytes\\\":465090934},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9609c00207cc4db97f0fd6162eb429d7f81654137f020a677e30cba26a887a24\\\"],\\\"sizeBytes\\\":463705930},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:632e80bba5077068ecca05fddb95aedebad4493af6f36152c01c6ae490975b62\\\"],\\\"sizeBytes\\\":458126937},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bcb08821551e9a5b9f82aa794bcea673279cefb93cb47492e19ccac5e2cf18fe\\\"],\\\"si
zeBytes\\\":456576198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:759fb1d5353dbbadd443f38631d977ca3aed9787b873be05cc9660532a252739\\\"],\\\"sizeBytes\\\":448828620},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3062f6485aec4770e60852b535c69a42527b305161fe856499c8658ead6d1e85\\\"],\\\"sizeBytes\\\":448042136},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951ecfeba9b2da4b653034d09275f925396a79c2d8461b8a7c71c776fee67ba0\\\"],\\\"sizeBytes\\\":443272037},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:292560e2d80b460468bb19fe0ddf289767c655027b03a76ee6c40c91ffe4c483\\\"],\\\"sizeBytes\\\":438654374},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e66fd50be6f83ce321a566dfb76f3725b597374077d5af13813b928f6b1267e\\\"],\\\"sizeBytes\\\":411587146},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3a494212f1ba17f0f0980eef583218330eccb56eadf6b8cb0548c76d99b5014\\\"],\\\"sizeBytes\\\":407347125},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422\\\"],\\\"sizeBytes\\\":396521761}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 09:55:57.186079 master-0 kubenswrapper[8244]: I0318 09:55:57.186001 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body= Mar 18 09:55:57.186281 master-0 kubenswrapper[8244]: I0318 09:55:57.186107 8244 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" Mar 18 09:55:57.283880 master-0 kubenswrapper[8244]: I0318 09:55:57.283814 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"25e8b4ad00ce2bdd7986e5a3dbebb908681f21787c999f9ac28c5b382c85fc69"} Mar 18 09:55:57.286522 master-0 kubenswrapper[8244]: I0318 09:55:57.286479 8244 generic.go:334] "Generic (PLEG): container finished" podID="2a4c7d0e-10a1-44d1-8874-8e2a76753106" containerID="c4f75bf7ef5ce11917c61ff84c2bd3e0d0e1e1136bc2eabc30374d4d7889b937" exitCode=0 Mar 18 09:55:57.286586 master-0 kubenswrapper[8244]: I0318 09:55:57.286559 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4s2km" event={"ID":"2a4c7d0e-10a1-44d1-8874-8e2a76753106","Type":"ContainerDied","Data":"c4f75bf7ef5ce11917c61ff84c2bd3e0d0e1e1136bc2eabc30374d4d7889b937"} Mar 18 09:55:57.288458 master-0 kubenswrapper[8244]: I0318 09:55:57.288430 8244 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="b649a1b6836488922085b0481b7f8dd7a8d3e925f43a8682136bc4cca08f8220" exitCode=0 Mar 18 09:55:57.288591 master-0 kubenswrapper[8244]: I0318 09:55:57.288497 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerDied","Data":"b649a1b6836488922085b0481b7f8dd7a8d3e925f43a8682136bc4cca08f8220"} Mar 18 09:55:57.290916 master-0 kubenswrapper[8244]: I0318 09:55:57.290380 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hn6md" 
event={"ID":"af588cc6-5c57-4fea-a8db-84bf34b647a3","Type":"ContainerStarted","Data":"0338cdb13e96b331f60752e9956b2a4b591e432d10014af91991b3918b5996f0"} Mar 18 09:55:57.292500 master-0 kubenswrapper[8244]: I0318 09:55:57.292054 8244 generic.go:334] "Generic (PLEG): container finished" podID="305c97a4-eb1b-4104-b9ba-2603229899b0" containerID="a97c2824af4a8942386c440e962d66b8577475834e78172714b5d24decf0108e" exitCode=0 Mar 18 09:55:57.292500 master-0 kubenswrapper[8244]: I0318 09:55:57.292096 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2n6d2" event={"ID":"305c97a4-eb1b-4104-b9ba-2603229899b0","Type":"ContainerDied","Data":"a97c2824af4a8942386c440e962d66b8577475834e78172714b5d24decf0108e"} Mar 18 09:55:57.301757 master-0 kubenswrapper[8244]: I0318 09:55:57.301698 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"614bad60cc203e379c2219ece0e463fc923ffaef207f86d7d7dbe59e9131f846"} Mar 18 09:55:57.331922 master-0 kubenswrapper[8244]: I0318 09:55:57.331847 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 09:55:58.309386 master-0 kubenswrapper[8244]: I0318 09:55:58.309310 8244 generic.go:334] "Generic (PLEG): container finished" podID="af588cc6-5c57-4fea-a8db-84bf34b647a3" containerID="0338cdb13e96b331f60752e9956b2a4b591e432d10014af91991b3918b5996f0" exitCode=0 Mar 18 09:55:58.310379 master-0 kubenswrapper[8244]: I0318 09:55:58.309408 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hn6md" event={"ID":"af588cc6-5c57-4fea-a8db-84bf34b647a3","Type":"ContainerDied","Data":"0338cdb13e96b331f60752e9956b2a4b591e432d10014af91991b3918b5996f0"} Mar 18 09:55:58.688133 master-0 kubenswrapper[8244]: I0318 09:55:58.687951 8244 patch_prober.go:28] interesting 
pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body= Mar 18 09:55:58.688133 master-0 kubenswrapper[8244]: I0318 09:55:58.688042 8244 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" Mar 18 09:56:00.185275 master-0 kubenswrapper[8244]: I0318 09:56:00.185193 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body= Mar 18 09:56:00.186138 master-0 kubenswrapper[8244]: I0318 09:56:00.185292 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" Mar 18 09:56:01.462147 master-0 kubenswrapper[8244]: I0318 09:56:01.462072 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 09:56:01.688358 master-0 kubenswrapper[8244]: I0318 09:56:01.688200 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get 
\"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body= Mar 18 09:56:01.688358 master-0 kubenswrapper[8244]: I0318 09:56:01.688265 8244 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" Mar 18 09:56:02.333350 master-0 kubenswrapper[8244]: I0318 09:56:02.333275 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-g25jq_3646e0cd-49c9-4a98-a2e3-efe9359cc6c4/openshift-controller-manager-operator/0.log" Mar 18 09:56:02.333529 master-0 kubenswrapper[8244]: I0318 09:56:02.333372 8244 generic.go:334] "Generic (PLEG): container finished" podID="3646e0cd-49c9-4a98-a2e3-efe9359cc6c4" containerID="c8f91dc57ea6bc611089a31345d27ad1b6b311c14621b5aebef7b7aac62f0940" exitCode=1 Mar 18 09:56:02.333529 master-0 kubenswrapper[8244]: I0318 09:56:02.333420 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq" event={"ID":"3646e0cd-49c9-4a98-a2e3-efe9359cc6c4","Type":"ContainerDied","Data":"c8f91dc57ea6bc611089a31345d27ad1b6b311c14621b5aebef7b7aac62f0940"} Mar 18 09:56:02.334006 master-0 kubenswrapper[8244]: I0318 09:56:02.333965 8244 scope.go:117] "RemoveContainer" containerID="c8f91dc57ea6bc611089a31345d27ad1b6b311c14621b5aebef7b7aac62f0940" Mar 18 09:56:03.186127 master-0 kubenswrapper[8244]: I0318 09:56:03.185946 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 
10.128.0.18:8443: connect: connection refused" start-of-body= Mar 18 09:56:03.186127 master-0 kubenswrapper[8244]: I0318 09:56:03.186049 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" Mar 18 09:56:03.187022 master-0 kubenswrapper[8244]: I0318 09:56:03.186155 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" Mar 18 09:56:03.187104 master-0 kubenswrapper[8244]: I0318 09:56:03.187038 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body= Mar 18 09:56:03.187173 master-0 kubenswrapper[8244]: I0318 09:56:03.187102 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" Mar 18 09:56:03.226286 master-0 kubenswrapper[8244]: I0318 09:56:03.225958 8244 patch_prober.go:28] interesting pod/apiserver-687747fbb4-k7dnf container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 18 09:56:03.226286 master-0 kubenswrapper[8244]: [+]log ok Mar 18 09:56:03.226286 master-0 kubenswrapper[8244]: [-]etcd failed: reason withheld Mar 18 09:56:03.226286 master-0 
kubenswrapper[8244]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 18 09:56:03.226286 master-0 kubenswrapper[8244]: [+]poststarthook/generic-apiserver-start-informers ok Mar 18 09:56:03.226286 master-0 kubenswrapper[8244]: [+]poststarthook/max-in-flight-filter ok Mar 18 09:56:03.226286 master-0 kubenswrapper[8244]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 18 09:56:03.226286 master-0 kubenswrapper[8244]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 18 09:56:03.226286 master-0 kubenswrapper[8244]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 18 09:56:03.226286 master-0 kubenswrapper[8244]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Mar 18 09:56:03.226286 master-0 kubenswrapper[8244]: [+]poststarthook/project.openshift.io-projectcache ok Mar 18 09:56:03.226286 master-0 kubenswrapper[8244]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 18 09:56:03.226286 master-0 kubenswrapper[8244]: [+]poststarthook/openshift.io-startinformers ok Mar 18 09:56:03.226286 master-0 kubenswrapper[8244]: [+]poststarthook/openshift.io-restmapperupdater ok Mar 18 09:56:03.226286 master-0 kubenswrapper[8244]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 18 09:56:03.226286 master-0 kubenswrapper[8244]: livez check failed Mar 18 09:56:03.226286 master-0 kubenswrapper[8244]: I0318 09:56:03.226277 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" podUID="0c7b317c-d141-4e69-9c82-4a5dda6c3248" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:56:04.462730 master-0 kubenswrapper[8244]: I0318 09:56:04.462587 8244 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" 
probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 09:56:04.688264 master-0 kubenswrapper[8244]: I0318 09:56:04.688141 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body= Mar 18 09:56:04.688589 master-0 kubenswrapper[8244]: I0318 09:56:04.688541 8244 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" Mar 18 09:56:04.688811 master-0 kubenswrapper[8244]: I0318 09:56:04.688776 8244 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" Mar 18 09:56:04.689757 master-0 kubenswrapper[8244]: I0318 09:56:04.689718 8244 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"fe475c93acb3e152a06334aa122f61bc3dfe0a7c617c3c6b5b5bc407433dfd76"} pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Mar 18 09:56:04.690006 master-0 kubenswrapper[8244]: I0318 09:56:04.689940 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 
10.128.0.18:8443: connect: connection refused" start-of-body= Mar 18 09:56:04.690117 master-0 kubenswrapper[8244]: I0318 09:56:04.690031 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" Mar 18 09:56:04.690117 master-0 kubenswrapper[8244]: I0318 09:56:04.689955 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" containerID="cri-o://fe475c93acb3e152a06334aa122f61bc3dfe0a7c617c3c6b5b5bc407433dfd76" gracePeriod=30 Mar 18 09:56:05.360137 master-0 kubenswrapper[8244]: I0318 09:56:05.360071 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2n6d2" event={"ID":"305c97a4-eb1b-4104-b9ba-2603229899b0","Type":"ContainerStarted","Data":"49a23c8f4def9e21a7f49e230fc81a54bd2391353d84a5994b1e32887aa942a1"} Mar 18 09:56:05.361956 master-0 kubenswrapper[8244]: I0318 09:56:05.361924 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-g25jq_3646e0cd-49c9-4a98-a2e3-efe9359cc6c4/openshift-controller-manager-operator/0.log" Mar 18 09:56:05.362088 master-0 kubenswrapper[8244]: I0318 09:56:05.362029 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq" event={"ID":"3646e0cd-49c9-4a98-a2e3-efe9359cc6c4","Type":"ContainerStarted","Data":"2795ecc70fe66ee4a0f920912ba6641b4460a6d001aedb4e015ff801933a203d"} Mar 18 09:56:05.364465 master-0 kubenswrapper[8244]: I0318 09:56:05.364441 
8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4s2km" event={"ID":"2a4c7d0e-10a1-44d1-8874-8e2a76753106","Type":"ContainerStarted","Data":"5e3f39d6c47db56b54c454760e9bcbc843db42e436708139221b97d0f4eca4c1"} Mar 18 09:56:05.365989 master-0 kubenswrapper[8244]: I0318 09:56:05.365961 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00/installer/0.log" Mar 18 09:56:05.366122 master-0 kubenswrapper[8244]: I0318 09:56:05.366020 8244 generic.go:334] "Generic (PLEG): container finished" podID="1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00" containerID="933b2ad053b9c23c3a2342880b67f40c11f8fa3992eedba2b2625d8844c5e60c" exitCode=1 Mar 18 09:56:05.366122 master-0 kubenswrapper[8244]: I0318 09:56:05.366052 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00","Type":"ContainerDied","Data":"933b2ad053b9c23c3a2342880b67f40c11f8fa3992eedba2b2625d8844c5e60c"} Mar 18 09:56:06.051099 master-0 kubenswrapper[8244]: I0318 09:56:06.051023 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00/installer/0.log" Mar 18 09:56:06.051706 master-0 kubenswrapper[8244]: I0318 09:56:06.051122 8244 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 09:56:06.070950 master-0 kubenswrapper[8244]: I0318 09:56:06.070890 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00-var-lock\") pod \"1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00\" (UID: \"1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00\") " Mar 18 09:56:06.071324 master-0 kubenswrapper[8244]: I0318 09:56:06.070973 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00-kube-api-access\") pod \"1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00\" (UID: \"1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00\") " Mar 18 09:56:06.071324 master-0 kubenswrapper[8244]: I0318 09:56:06.071027 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00-kubelet-dir\") pod \"1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00\" (UID: \"1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00\") " Mar 18 09:56:06.071425 master-0 kubenswrapper[8244]: I0318 09:56:06.071351 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00" (UID: "1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:56:06.071425 master-0 kubenswrapper[8244]: I0318 09:56:06.071396 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00-var-lock" (OuterVolumeSpecName: "var-lock") pod "1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00" (UID: "1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:56:06.081993 master-0 kubenswrapper[8244]: I0318 09:56:06.081925 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00" (UID: "1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:56:06.171885 master-0 kubenswrapper[8244]: I0318 09:56:06.171803 8244 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 09:56:06.171885 master-0 kubenswrapper[8244]: I0318 09:56:06.171873 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 09:56:06.171885 master-0 kubenswrapper[8244]: I0318 09:56:06.171891 8244 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:56:06.185601 master-0 kubenswrapper[8244]: I0318 09:56:06.185528 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body= Mar 18 09:56:06.185783 master-0 kubenswrapper[8244]: I0318 09:56:06.185628 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" 
containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" Mar 18 09:56:06.374140 master-0 kubenswrapper[8244]: I0318 09:56:06.374012 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hn6md" event={"ID":"af588cc6-5c57-4fea-a8db-84bf34b647a3","Type":"ContainerStarted","Data":"91ebdefaf6e1db7f6ba006a75e8fa665d272029e470b99c96f6f3bc993072519"} Mar 18 09:56:06.375800 master-0 kubenswrapper[8244]: I0318 09:56:06.375783 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00/installer/0.log" Mar 18 09:56:06.375974 master-0 kubenswrapper[8244]: I0318 09:56:06.375944 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00","Type":"ContainerDied","Data":"03e0c8a2298260aa3a63483fbc7bfb57b4d0366e456b6f98e512ee9a034418aa"} Mar 18 09:56:06.376056 master-0 kubenswrapper[8244]: I0318 09:56:06.375992 8244 scope.go:117] "RemoveContainer" containerID="933b2ad053b9c23c3a2342880b67f40c11f8fa3992eedba2b2625d8844c5e60c" Mar 18 09:56:06.376109 master-0 kubenswrapper[8244]: I0318 09:56:06.376070 8244 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 09:56:06.392534 master-0 kubenswrapper[8244]: E0318 09:56:06.392466 8244 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 09:56:06.544602 master-0 kubenswrapper[8244]: I0318 09:56:06.544543 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hn6md" Mar 18 09:56:06.544877 master-0 kubenswrapper[8244]: I0318 09:56:06.544778 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hn6md" Mar 18 09:56:06.626633 master-0 kubenswrapper[8244]: E0318 09:56:06.626469 8244 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 09:56:06.810923 master-0 kubenswrapper[8244]: I0318 09:56:06.810868 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv" Mar 18 09:56:07.580347 master-0 kubenswrapper[8244]: I0318 09:56:07.580276 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hn6md" podUID="af588cc6-5c57-4fea-a8db-84bf34b647a3" containerName="registry-server" probeResult="failure" output=< Mar 18 09:56:07.580347 master-0 kubenswrapper[8244]: timeout: failed to connect service ":50051" within 1s Mar 18 09:56:07.580347 master-0 kubenswrapper[8244]: > Mar 18 09:56:09.185969 master-0 kubenswrapper[8244]: I0318 09:56:09.185815 8244 patch_prober.go:28] interesting 
pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body= Mar 18 09:56:09.185969 master-0 kubenswrapper[8244]: I0318 09:56:09.185940 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" Mar 18 09:56:10.301060 master-0 kubenswrapper[8244]: I0318 09:56:10.300976 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2n6d2" Mar 18 09:56:10.301974 master-0 kubenswrapper[8244]: I0318 09:56:10.301103 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2n6d2" Mar 18 09:56:10.303955 master-0 kubenswrapper[8244]: E0318 09:56:10.303905 8244 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 18 09:56:10.368916 master-0 kubenswrapper[8244]: I0318 09:56:10.368787 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2n6d2" Mar 18 09:56:10.400798 master-0 kubenswrapper[8244]: I0318 09:56:10.400750 8244 generic.go:334] "Generic (PLEG): container finished" podID="d664a6d0d2a24360dee10612610f1b59" containerID="f305e60f069ccf418ecdd9a5248eadee88e9d2be3ca57cfd6181d1ab96140c57" exitCode=0 Mar 18 09:56:10.455297 master-0 kubenswrapper[8244]: I0318 09:56:10.455213 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-2n6d2" Mar 18 09:56:11.408148 master-0 kubenswrapper[8244]: I0318 09:56:11.408092 8244 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="c7b162f5cf656f36d03dfa1037f5de6d89cb9f49ed2898c8a6cb0def4574b717" exitCode=0 Mar 18 09:56:11.408710 master-0 kubenswrapper[8244]: I0318 09:56:11.408204 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerDied","Data":"c7b162f5cf656f36d03dfa1037f5de6d89cb9f49ed2898c8a6cb0def4574b717"} Mar 18 09:56:11.819869 master-0 kubenswrapper[8244]: I0318 09:56:11.819802 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_d664a6d0d2a24360dee10612610f1b59/etcdctl/0.log" Mar 18 09:56:11.820141 master-0 kubenswrapper[8244]: I0318 09:56:11.819944 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 18 09:56:11.841065 master-0 kubenswrapper[8244]: I0318 09:56:11.841008 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"d664a6d0d2a24360dee10612610f1b59\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " Mar 18 09:56:11.841169 master-0 kubenswrapper[8244]: I0318 09:56:11.841104 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"d664a6d0d2a24360dee10612610f1b59\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " Mar 18 09:56:11.841272 master-0 kubenswrapper[8244]: I0318 09:56:11.841203 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir" (OuterVolumeSpecName: "data-dir") pod 
"d664a6d0d2a24360dee10612610f1b59" (UID: "d664a6d0d2a24360dee10612610f1b59"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:56:11.841272 master-0 kubenswrapper[8244]: I0318 09:56:11.841244 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs" (OuterVolumeSpecName: "certs") pod "d664a6d0d2a24360dee10612610f1b59" (UID: "d664a6d0d2a24360dee10612610f1b59"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:56:11.841610 master-0 kubenswrapper[8244]: I0318 09:56:11.841586 8244 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:56:11.841610 master-0 kubenswrapper[8244]: I0318 09:56:11.841610 8244 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 09:56:12.185223 master-0 kubenswrapper[8244]: I0318 09:56:12.185162 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body= Mar 18 09:56:12.185459 master-0 kubenswrapper[8244]: I0318 09:56:12.185254 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" Mar 18 09:56:12.236167 master-0 kubenswrapper[8244]: I0318 09:56:12.236100 8244 
patch_prober.go:28] interesting pod/apiserver-687747fbb4-k7dnf container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 18 09:56:12.236167 master-0 kubenswrapper[8244]: [+]log ok Mar 18 09:56:12.236167 master-0 kubenswrapper[8244]: [-]etcd failed: reason withheld Mar 18 09:56:12.236167 master-0 kubenswrapper[8244]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 18 09:56:12.236167 master-0 kubenswrapper[8244]: [+]poststarthook/generic-apiserver-start-informers ok Mar 18 09:56:12.236167 master-0 kubenswrapper[8244]: [+]poststarthook/max-in-flight-filter ok Mar 18 09:56:12.236167 master-0 kubenswrapper[8244]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 18 09:56:12.236167 master-0 kubenswrapper[8244]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 18 09:56:12.236167 master-0 kubenswrapper[8244]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 18 09:56:12.236167 master-0 kubenswrapper[8244]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Mar 18 09:56:12.236167 master-0 kubenswrapper[8244]: [+]poststarthook/project.openshift.io-projectcache ok Mar 18 09:56:12.236167 master-0 kubenswrapper[8244]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 18 09:56:12.236167 master-0 kubenswrapper[8244]: [+]poststarthook/openshift.io-startinformers ok Mar 18 09:56:12.236167 master-0 kubenswrapper[8244]: [+]poststarthook/openshift.io-restmapperupdater ok Mar 18 09:56:12.236167 master-0 kubenswrapper[8244]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 18 09:56:12.236167 master-0 kubenswrapper[8244]: livez check failed Mar 18 09:56:12.236758 master-0 kubenswrapper[8244]: I0318 09:56:12.236174 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" 
podUID="0c7b317c-d141-4e69-9c82-4a5dda6c3248" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:56:12.415723 master-0 kubenswrapper[8244]: I0318 09:56:12.415628 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_d664a6d0d2a24360dee10612610f1b59/etcdctl/0.log" Mar 18 09:56:12.416704 master-0 kubenswrapper[8244]: I0318 09:56:12.415735 8244 generic.go:334] "Generic (PLEG): container finished" podID="d664a6d0d2a24360dee10612610f1b59" containerID="f2607ae8dbefbd514d748ad1e03e092ca2114ce2ce09f7065e579402bdb6a4c5" exitCode=137 Mar 18 09:56:12.416704 master-0 kubenswrapper[8244]: I0318 09:56:12.415842 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 18 09:56:12.416704 master-0 kubenswrapper[8244]: I0318 09:56:12.415858 8244 scope.go:117] "RemoveContainer" containerID="f305e60f069ccf418ecdd9a5248eadee88e9d2be3ca57cfd6181d1ab96140c57" Mar 18 09:56:12.434930 master-0 kubenswrapper[8244]: I0318 09:56:12.434708 8244 scope.go:117] "RemoveContainer" containerID="f2607ae8dbefbd514d748ad1e03e092ca2114ce2ce09f7065e579402bdb6a4c5" Mar 18 09:56:12.452416 master-0 kubenswrapper[8244]: I0318 09:56:12.452247 8244 scope.go:117] "RemoveContainer" containerID="f305e60f069ccf418ecdd9a5248eadee88e9d2be3ca57cfd6181d1ab96140c57" Mar 18 09:56:12.453031 master-0 kubenswrapper[8244]: E0318 09:56:12.452984 8244 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f305e60f069ccf418ecdd9a5248eadee88e9d2be3ca57cfd6181d1ab96140c57\": container with ID starting with f305e60f069ccf418ecdd9a5248eadee88e9d2be3ca57cfd6181d1ab96140c57 not found: ID does not exist" containerID="f305e60f069ccf418ecdd9a5248eadee88e9d2be3ca57cfd6181d1ab96140c57" Mar 18 09:56:12.453143 master-0 kubenswrapper[8244]: I0318 09:56:12.453037 8244 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f305e60f069ccf418ecdd9a5248eadee88e9d2be3ca57cfd6181d1ab96140c57"} err="failed to get container status \"f305e60f069ccf418ecdd9a5248eadee88e9d2be3ca57cfd6181d1ab96140c57\": rpc error: code = NotFound desc = could not find container \"f305e60f069ccf418ecdd9a5248eadee88e9d2be3ca57cfd6181d1ab96140c57\": container with ID starting with f305e60f069ccf418ecdd9a5248eadee88e9d2be3ca57cfd6181d1ab96140c57 not found: ID does not exist" Mar 18 09:56:12.453143 master-0 kubenswrapper[8244]: I0318 09:56:12.453070 8244 scope.go:117] "RemoveContainer" containerID="f2607ae8dbefbd514d748ad1e03e092ca2114ce2ce09f7065e579402bdb6a4c5" Mar 18 09:56:12.453588 master-0 kubenswrapper[8244]: E0318 09:56:12.453560 8244 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2607ae8dbefbd514d748ad1e03e092ca2114ce2ce09f7065e579402bdb6a4c5\": container with ID starting with f2607ae8dbefbd514d748ad1e03e092ca2114ce2ce09f7065e579402bdb6a4c5 not found: ID does not exist" containerID="f2607ae8dbefbd514d748ad1e03e092ca2114ce2ce09f7065e579402bdb6a4c5" Mar 18 09:56:12.453669 master-0 kubenswrapper[8244]: I0318 09:56:12.453598 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2607ae8dbefbd514d748ad1e03e092ca2114ce2ce09f7065e579402bdb6a4c5"} err="failed to get container status \"f2607ae8dbefbd514d748ad1e03e092ca2114ce2ce09f7065e579402bdb6a4c5\": rpc error: code = NotFound desc = could not find container \"f2607ae8dbefbd514d748ad1e03e092ca2114ce2ce09f7065e579402bdb6a4c5\": container with ID starting with f2607ae8dbefbd514d748ad1e03e092ca2114ce2ce09f7065e579402bdb6a4c5 not found: ID does not exist" Mar 18 09:56:13.498509 master-0 kubenswrapper[8244]: I0318 09:56:13.498396 8244 patch_prober.go:28] interesting pod/authentication-operator-5885bfd7f4-4q9tr container/authentication-operator 
namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" start-of-body= Mar 18 09:56:13.499517 master-0 kubenswrapper[8244]: I0318 09:56:13.498526 8244 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr" podUID="f076eaf0-b041-4db0-ba06-3d85e23bb654" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" Mar 18 09:56:13.744381 master-0 kubenswrapper[8244]: I0318 09:56:13.744319 8244 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d664a6d0d2a24360dee10612610f1b59" path="/var/lib/kubelet/pods/d664a6d0d2a24360dee10612610f1b59/volumes" Mar 18 09:56:13.745288 master-0 kubenswrapper[8244]: I0318 09:56:13.745233 8244 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 18 09:56:14.462805 master-0 kubenswrapper[8244]: I0318 09:56:14.462702 8244 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 09:56:14.779366 master-0 kubenswrapper[8244]: I0318 09:56:14.779222 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 18 09:56:14.788211 master-0 kubenswrapper[8244]: E0318 09:56:14.788163 8244 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler/installer-4-master-0: failed to fetch token: Timeout: request did not complete within requested timeout - 
context deadline exceeded Mar 18 09:56:14.788330 master-0 kubenswrapper[8244]: E0318 09:56:14.788266 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54a208d1-afe8-49b5-92e0-e27afb4abc80-kube-api-access podName:54a208d1-afe8-49b5-92e0-e27afb4abc80 nodeName:}" failed. No retries permitted until 2026-03-18 09:56:15.288239081 +0000 UTC m=+91.767975249 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/54a208d1-afe8-49b5-92e0-e27afb4abc80-kube-api-access") pod "installer-4-master-0" (UID: "54a208d1-afe8-49b5-92e0-e27afb4abc80") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 18 09:56:14.820531 master-0 kubenswrapper[8244]: I0318 09:56:14.820459 8244 status_manager.go:907] "Failed to delete status for pod" pod="openshift-route-controller-manager/route-controller-manager-664bd974c9-7w9f6" err="Timeout: request did not complete within requested timeout - context deadline exceeded" Mar 18 09:56:14.820737 master-0 kubenswrapper[8244]: E0318 09:56:14.820513 8244 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{route-controller-manager-54cf6885f8-xsgcr.189de6ee7542899f openshift-route-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-route-controller-manager,Name:route-controller-manager-54cf6885f8-xsgcr,UID:3a9c36d0-e3f3-441e-bbab-44703a0eeb70,APIVersion:v1,ResourceVersion:7296,FieldPath:spec.containers{route-controller-manager},},Reason:Started,Message:Started container route-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:55:40.097743263 +0000 UTC m=+56.577479431,LastTimestamp:2026-03-18 09:55:40.097743263 +0000 UTC 
m=+56.577479431,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:56:14.878702 master-0 kubenswrapper[8244]: E0318 09:56:14.878637 8244 projected.go:194] Error preparing data for projected volume kube-api-access-958k6 for pod openshift-marketplace/community-operators-lx2pt: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 18 09:56:14.878899 master-0 kubenswrapper[8244]: E0318 09:56:14.878731 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3e884f11-9ace-4ef9-930a-05e170d1bfab-kube-api-access-958k6 podName:3e884f11-9ace-4ef9-930a-05e170d1bfab nodeName:}" failed. No retries permitted until 2026-03-18 09:56:15.378714418 +0000 UTC m=+91.858450546 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-958k6" (UniqueName: "kubernetes.io/projected/3e884f11-9ace-4ef9-930a-05e170d1bfab-kube-api-access-958k6") pod "community-operators-lx2pt" (UID: "3e884f11-9ace-4ef9-930a-05e170d1bfab") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 18 09:56:15.186036 master-0 kubenswrapper[8244]: I0318 09:56:15.185948 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body= Mar 18 09:56:15.186268 master-0 kubenswrapper[8244]: I0318 09:56:15.186039 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": 
dial tcp 10.128.0.18:8443: connect: connection refused" Mar 18 09:56:15.383321 master-0 kubenswrapper[8244]: I0318 09:56:15.383206 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-958k6\" (UniqueName: \"kubernetes.io/projected/3e884f11-9ace-4ef9-930a-05e170d1bfab-kube-api-access-958k6\") pod \"community-operators-lx2pt\" (UID: \"3e884f11-9ace-4ef9-930a-05e170d1bfab\") " pod="openshift-marketplace/community-operators-lx2pt" Mar 18 09:56:15.383615 master-0 kubenswrapper[8244]: I0318 09:56:15.383427 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54a208d1-afe8-49b5-92e0-e27afb4abc80-kube-api-access\") pod \"installer-4-master-0\" (UID: \"54a208d1-afe8-49b5-92e0-e27afb4abc80\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 09:56:16.392946 master-0 kubenswrapper[8244]: E0318 09:56:16.392802 8244 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" Mar 18 09:56:16.627015 master-0 kubenswrapper[8244]: E0318 09:56:16.626919 8244 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 09:56:18.185710 master-0 kubenswrapper[8244]: I0318 09:56:18.185616 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body= Mar 18 09:56:18.186345 master-0 kubenswrapper[8244]: I0318 09:56:18.185708 8244 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" Mar 18 09:56:18.458008 master-0 kubenswrapper[8244]: I0318 09:56:18.457893 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_5fb70bf3-93cd-4000-be1a-8e21846d5709/installer/0.log" Mar 18 09:56:18.458008 master-0 kubenswrapper[8244]: I0318 09:56:18.457932 8244 generic.go:334] "Generic (PLEG): container finished" podID="5fb70bf3-93cd-4000-be1a-8e21846d5709" containerID="22a0f37f7177929cbf4f5043d36e78b2ea4f84b8562060ced4185a407eb57943" exitCode=1 Mar 18 09:56:21.185378 master-0 kubenswrapper[8244]: I0318 09:56:21.185325 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body= Mar 18 09:56:21.186190 master-0 kubenswrapper[8244]: I0318 09:56:21.186095 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" Mar 18 09:56:21.241754 master-0 kubenswrapper[8244]: I0318 09:56:21.241666 8244 patch_prober.go:28] interesting pod/apiserver-687747fbb4-k7dnf container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 18 09:56:21.241754 master-0 kubenswrapper[8244]: [+]log ok 
Mar 18 09:56:21.241754 master-0 kubenswrapper[8244]: [-]etcd failed: reason withheld Mar 18 09:56:21.241754 master-0 kubenswrapper[8244]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 18 09:56:21.241754 master-0 kubenswrapper[8244]: [+]poststarthook/generic-apiserver-start-informers ok Mar 18 09:56:21.241754 master-0 kubenswrapper[8244]: [+]poststarthook/max-in-flight-filter ok Mar 18 09:56:21.241754 master-0 kubenswrapper[8244]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 18 09:56:21.241754 master-0 kubenswrapper[8244]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 18 09:56:21.241754 master-0 kubenswrapper[8244]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 18 09:56:21.241754 master-0 kubenswrapper[8244]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Mar 18 09:56:21.241754 master-0 kubenswrapper[8244]: [+]poststarthook/project.openshift.io-projectcache ok Mar 18 09:56:21.241754 master-0 kubenswrapper[8244]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 18 09:56:21.241754 master-0 kubenswrapper[8244]: [+]poststarthook/openshift.io-startinformers ok Mar 18 09:56:21.241754 master-0 kubenswrapper[8244]: [+]poststarthook/openshift.io-restmapperupdater ok Mar 18 09:56:21.241754 master-0 kubenswrapper[8244]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 18 09:56:21.241754 master-0 kubenswrapper[8244]: livez check failed Mar 18 09:56:21.241754 master-0 kubenswrapper[8244]: I0318 09:56:21.241746 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" podUID="0c7b317c-d141-4e69-9c82-4a5dda6c3248" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:56:22.480513 master-0 kubenswrapper[8244]: I0318 09:56:22.480406 8244 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-8srnz_9ccdc221-4ec5-487e-8ec4-85284ed628d8/network-operator/0.log" Mar 18 09:56:22.480513 master-0 kubenswrapper[8244]: I0318 09:56:22.480513 8244 generic.go:334] "Generic (PLEG): container finished" podID="9ccdc221-4ec5-487e-8ec4-85284ed628d8" containerID="809e75633cdef66e6f08501f6041dd63595d2c3bfee4b8663f566a1c8682596e" exitCode=255 Mar 18 09:56:23.498937 master-0 kubenswrapper[8244]: I0318 09:56:23.498802 8244 patch_prober.go:28] interesting pod/authentication-operator-5885bfd7f4-4q9tr container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" start-of-body= Mar 18 09:56:23.498937 master-0 kubenswrapper[8244]: I0318 09:56:23.498917 8244 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr" podUID="f076eaf0-b041-4db0-ba06-3d85e23bb654" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" Mar 18 09:56:24.185474 master-0 kubenswrapper[8244]: I0318 09:56:24.185405 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body= Mar 18 09:56:24.185744 master-0 kubenswrapper[8244]: I0318 09:56:24.185496 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: 
connect: connection refused" Mar 18 09:56:24.414647 master-0 kubenswrapper[8244]: E0318 09:56:24.414570 8244 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 18 09:56:24.462157 master-0 kubenswrapper[8244]: I0318 09:56:24.462079 8244 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 09:56:25.502633 master-0 kubenswrapper[8244]: I0318 09:56:25.502557 8244 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="29ee184cf90562745faf61ae9cbea267c5bcca4ff035e200a79195304cec3e37" exitCode=0 Mar 18 09:56:26.393627 master-0 kubenswrapper[8244]: E0318 09:56:26.393543 8244 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 09:56:26.512708 master-0 kubenswrapper[8244]: I0318 09:56:26.512611 8244 generic.go:334] "Generic (PLEG): container finished" podID="a078565a-6970-4f42-84f4-938f1d637245" containerID="035a83745bfe6ed219f87a31bd7766c9d9b162354f5f4e36d6dc8a6cc1dbc053" exitCode=0 Mar 18 09:56:26.515623 master-0 kubenswrapper[8244]: I0318 09:56:26.515570 8244 generic.go:334] "Generic (PLEG): container finished" podID="ec53d7fa-445b-4e1d-84ef-545f08e80ccc" containerID="5852b37c5e8c94f0baa4c4a1981174d60f6d9f69d3672da3d78ad25102d900a1" exitCode=0 Mar 18 09:56:26.627985 master-0 kubenswrapper[8244]: E0318 09:56:26.627921 8244 kubelet_node_status.go:585] 
"Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 09:56:27.185374 master-0 kubenswrapper[8244]: I0318 09:56:27.185278 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body= Mar 18 09:56:27.185633 master-0 kubenswrapper[8244]: I0318 09:56:27.185373 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" Mar 18 09:56:29.378128 master-0 kubenswrapper[8244]: I0318 09:56:29.378032 8244 patch_prober.go:28] interesting pod/etcd-operator-8544cbcf9c-4tlnm container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body= Mar 18 09:56:29.378128 master-0 kubenswrapper[8244]: I0318 09:56:29.378116 8244 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" podUID="a078565a-6970-4f42-84f4-938f1d637245" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" Mar 18 09:56:30.185854 master-0 kubenswrapper[8244]: I0318 09:56:30.185764 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator 
namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body= Mar 18 09:56:30.186072 master-0 kubenswrapper[8244]: I0318 09:56:30.185898 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" Mar 18 09:56:30.256556 master-0 kubenswrapper[8244]: I0318 09:56:30.249731 8244 patch_prober.go:28] interesting pod/apiserver-687747fbb4-k7dnf container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 18 09:56:30.256556 master-0 kubenswrapper[8244]: [+]log ok Mar 18 09:56:30.256556 master-0 kubenswrapper[8244]: [-]etcd failed: reason withheld Mar 18 09:56:30.256556 master-0 kubenswrapper[8244]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 18 09:56:30.256556 master-0 kubenswrapper[8244]: [+]poststarthook/generic-apiserver-start-informers ok Mar 18 09:56:30.256556 master-0 kubenswrapper[8244]: [+]poststarthook/max-in-flight-filter ok Mar 18 09:56:30.256556 master-0 kubenswrapper[8244]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 18 09:56:30.256556 master-0 kubenswrapper[8244]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 18 09:56:30.256556 master-0 kubenswrapper[8244]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 18 09:56:30.256556 master-0 kubenswrapper[8244]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Mar 18 09:56:30.256556 master-0 kubenswrapper[8244]: [+]poststarthook/project.openshift.io-projectcache ok Mar 18 09:56:30.256556 master-0 
kubenswrapper[8244]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 18 09:56:30.256556 master-0 kubenswrapper[8244]: [+]poststarthook/openshift.io-startinformers ok Mar 18 09:56:30.256556 master-0 kubenswrapper[8244]: [+]poststarthook/openshift.io-restmapperupdater ok Mar 18 09:56:30.256556 master-0 kubenswrapper[8244]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 18 09:56:30.256556 master-0 kubenswrapper[8244]: livez check failed Mar 18 09:56:30.256556 master-0 kubenswrapper[8244]: I0318 09:56:30.249813 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" podUID="0c7b317c-d141-4e69-9c82-4a5dda6c3248" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:56:30.540678 master-0 kubenswrapper[8244]: I0318 09:56:30.540597 8244 generic.go:334] "Generic (PLEG): container finished" podID="bb35841e-d992-4044-aaaa-06c9faf47bd0" containerID="21ea6abc98e78a0444eb255d9f1edf6ce13e5e0f11a1d4b38c35dd0e5e280fcf" exitCode=0 Mar 18 09:56:30.553524 master-0 kubenswrapper[8244]: I0318 09:56:30.543421 8244 generic.go:334] "Generic (PLEG): container finished" podID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerID="fe475c93acb3e152a06334aa122f61bc3dfe0a7c617c3c6b5b5bc407433dfd76" exitCode=0 Mar 18 09:56:33.185488 master-0 kubenswrapper[8244]: I0318 09:56:33.185422 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body= Mar 18 09:56:33.186479 master-0 kubenswrapper[8244]: I0318 09:56:33.186152 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" 
containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" Mar 18 09:56:33.499598 master-0 kubenswrapper[8244]: I0318 09:56:33.499462 8244 patch_prober.go:28] interesting pod/authentication-operator-5885bfd7f4-4q9tr container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" start-of-body= Mar 18 09:56:33.499949 master-0 kubenswrapper[8244]: I0318 09:56:33.499602 8244 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr" podUID="f076eaf0-b041-4db0-ba06-3d85e23bb654" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" Mar 18 09:56:33.565672 master-0 kubenswrapper[8244]: I0318 09:56:33.565594 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-7fl4x_bb942756-bac7-414d-b179-cebdce588a13/approver/0.log" Mar 18 09:56:33.566101 master-0 kubenswrapper[8244]: I0318 09:56:33.566059 8244 generic.go:334] "Generic (PLEG): container finished" podID="bb942756-bac7-414d-b179-cebdce588a13" containerID="11b5b6c3c569b883f4e3bfd269fb3345429d4cace9fc05301ab08ee60a18aa95" exitCode=1 Mar 18 09:56:35.580595 master-0 kubenswrapper[8244]: I0318 09:56:35.580533 8244 generic.go:334] "Generic (PLEG): container finished" podID="0999f781-3299-4cb6-ba76-2a4f4584c685" containerID="e5c331496115ef5ceb50ea93103ae754d1d16032e25eefad5a38ee8ba0e6ac68" exitCode=0 Mar 18 09:56:36.185320 master-0 kubenswrapper[8244]: I0318 09:56:36.185196 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator 
namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body= Mar 18 09:56:36.185320 master-0 kubenswrapper[8244]: I0318 09:56:36.185296 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" Mar 18 09:56:36.394432 master-0 kubenswrapper[8244]: E0318 09:56:36.394324 8244 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 09:56:36.394432 master-0 kubenswrapper[8244]: I0318 09:56:36.394408 8244 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 18 09:56:36.629528 master-0 kubenswrapper[8244]: E0318 09:56:36.629400 8244 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 09:56:36.629528 master-0 kubenswrapper[8244]: E0318 09:56:36.629482 8244 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 18 09:56:38.509161 master-0 kubenswrapper[8244]: E0318 09:56:38.509062 8244 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 18 09:56:39.185518 master-0 
kubenswrapper[8244]: I0318 09:56:39.185429 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body= Mar 18 09:56:39.185518 master-0 kubenswrapper[8244]: I0318 09:56:39.185506 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" Mar 18 09:56:39.257325 master-0 kubenswrapper[8244]: I0318 09:56:39.257193 8244 patch_prober.go:28] interesting pod/apiserver-687747fbb4-k7dnf container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 18 09:56:39.257325 master-0 kubenswrapper[8244]: [+]log ok Mar 18 09:56:39.257325 master-0 kubenswrapper[8244]: [-]etcd failed: reason withheld Mar 18 09:56:39.257325 master-0 kubenswrapper[8244]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 18 09:56:39.257325 master-0 kubenswrapper[8244]: [+]poststarthook/generic-apiserver-start-informers ok Mar 18 09:56:39.257325 master-0 kubenswrapper[8244]: [+]poststarthook/max-in-flight-filter ok Mar 18 09:56:39.257325 master-0 kubenswrapper[8244]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 18 09:56:39.257325 master-0 kubenswrapper[8244]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 18 09:56:39.257325 master-0 kubenswrapper[8244]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 18 09:56:39.257325 master-0 kubenswrapper[8244]: 
[+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Mar 18 09:56:39.257325 master-0 kubenswrapper[8244]: [+]poststarthook/project.openshift.io-projectcache ok Mar 18 09:56:39.257325 master-0 kubenswrapper[8244]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 18 09:56:39.257325 master-0 kubenswrapper[8244]: [+]poststarthook/openshift.io-startinformers ok Mar 18 09:56:39.257325 master-0 kubenswrapper[8244]: [+]poststarthook/openshift.io-restmapperupdater ok Mar 18 09:56:39.257325 master-0 kubenswrapper[8244]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 18 09:56:39.257325 master-0 kubenswrapper[8244]: livez check failed Mar 18 09:56:39.258529 master-0 kubenswrapper[8244]: I0318 09:56:39.257331 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" podUID="0c7b317c-d141-4e69-9c82-4a5dda6c3248" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:56:40.630158 master-0 kubenswrapper[8244]: I0318 09:56:40.630080 8244 generic.go:334] "Generic (PLEG): container finished" podID="f076eaf0-b041-4db0-ba06-3d85e23bb654" containerID="86e19dd48a4220e684cd4591a7ea73d2539f388a0f50f6f6c55feee37bcbb65f" exitCode=0 Mar 18 09:56:41.640476 master-0 kubenswrapper[8244]: I0318 09:56:41.640380 8244 generic.go:334] "Generic (PLEG): container finished" podID="3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6" containerID="da02ee0de03a088a8c40f809ca8f007d6167a1c499d12f1066049752159499b0" exitCode=0 Mar 18 09:56:41.643658 master-0 kubenswrapper[8244]: I0318 09:56:41.643587 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-f8f5f6bc4-87dt7_54e26470-5ffb-4673-9375-e80031cc6750/controller-manager/0.log" Mar 18 09:56:41.643796 master-0 kubenswrapper[8244]: I0318 09:56:41.643660 8244 generic.go:334] "Generic (PLEG): container finished" 
podID="54e26470-5ffb-4673-9375-e80031cc6750" containerID="1248d2a0db71d324c2c95a679e324dd57a6ddd00508bb65cb77279b8a3a015b8" exitCode=255 Mar 18 09:56:42.185475 master-0 kubenswrapper[8244]: I0318 09:56:42.185387 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body= Mar 18 09:56:42.185740 master-0 kubenswrapper[8244]: I0318 09:56:42.185476 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" Mar 18 09:56:45.185146 master-0 kubenswrapper[8244]: I0318 09:56:45.185068 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body= Mar 18 09:56:45.186141 master-0 kubenswrapper[8244]: I0318 09:56:45.185163 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" Mar 18 09:56:46.395111 master-0 kubenswrapper[8244]: E0318 09:56:46.394972 8244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="200ms" Mar 18 09:56:46.683507 master-0 kubenswrapper[8244]: I0318 09:56:46.683337 8244 generic.go:334] "Generic (PLEG): container finished" podID="6a6a616d-012a-479e-ab3d-b21295ea1805" containerID="baecef73d93e3ca9ff934b2e1c379d4ea8c4c91e3cae11e23b740ee52145d967" exitCode=0 Mar 18 09:56:47.109498 master-0 kubenswrapper[8244]: I0318 09:56:47.109417 8244 patch_prober.go:28] interesting pod/controller-manager-f8f5f6bc4-87dt7 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.49:8443/healthz\": dial tcp 10.128.0.49:8443: connect: connection refused" start-of-body= Mar 18 09:56:47.109498 master-0 kubenswrapper[8244]: I0318 09:56:47.109474 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7" podUID="54e26470-5ffb-4673-9375-e80031cc6750" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.49:8443/healthz\": dial tcp 10.128.0.49:8443: connect: connection refused" Mar 18 09:56:47.109937 master-0 kubenswrapper[8244]: I0318 09:56:47.109915 8244 patch_prober.go:28] interesting pod/controller-manager-f8f5f6bc4-87dt7 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.49:8443/healthz\": dial tcp 10.128.0.49:8443: connect: connection refused" start-of-body= Mar 18 09:56:47.109991 master-0 kubenswrapper[8244]: I0318 09:56:47.109939 8244 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7" podUID="54e26470-5ffb-4673-9375-e80031cc6750" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.49:8443/healthz\": dial tcp 10.128.0.49:8443: connect: connection 
refused" Mar 18 09:56:47.748853 master-0 kubenswrapper[8244]: E0318 09:56:47.748737 8244 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 18 09:56:47.749607 master-0 kubenswrapper[8244]: E0318 09:56:47.748976 8244 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.016s" Mar 18 09:56:47.749607 master-0 kubenswrapper[8244]: I0318 09:56:47.749007 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4s2km" Mar 18 09:56:47.749607 master-0 kubenswrapper[8244]: I0318 09:56:47.749085 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4s2km" Mar 18 09:56:47.757384 master-0 kubenswrapper[8244]: I0318 09:56:47.757302 8244 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 18 09:56:48.185869 master-0 kubenswrapper[8244]: I0318 09:56:48.185695 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body= Mar 18 09:56:48.185869 master-0 kubenswrapper[8244]: I0318 09:56:48.185791 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" Mar 18 09:56:48.265817 master-0 kubenswrapper[8244]: I0318 09:56:48.265760 8244 patch_prober.go:28] interesting 
pod/apiserver-687747fbb4-k7dnf container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 18 09:56:48.265817 master-0 kubenswrapper[8244]: [+]log ok Mar 18 09:56:48.265817 master-0 kubenswrapper[8244]: [-]etcd failed: reason withheld Mar 18 09:56:48.265817 master-0 kubenswrapper[8244]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 18 09:56:48.265817 master-0 kubenswrapper[8244]: [+]poststarthook/generic-apiserver-start-informers ok Mar 18 09:56:48.265817 master-0 kubenswrapper[8244]: [+]poststarthook/max-in-flight-filter ok Mar 18 09:56:48.265817 master-0 kubenswrapper[8244]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 18 09:56:48.265817 master-0 kubenswrapper[8244]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 18 09:56:48.265817 master-0 kubenswrapper[8244]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 18 09:56:48.265817 master-0 kubenswrapper[8244]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Mar 18 09:56:48.265817 master-0 kubenswrapper[8244]: [+]poststarthook/project.openshift.io-projectcache ok Mar 18 09:56:48.265817 master-0 kubenswrapper[8244]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 18 09:56:48.265817 master-0 kubenswrapper[8244]: [+]poststarthook/openshift.io-startinformers ok Mar 18 09:56:48.265817 master-0 kubenswrapper[8244]: [+]poststarthook/openshift.io-restmapperupdater ok Mar 18 09:56:48.265817 master-0 kubenswrapper[8244]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 18 09:56:48.265817 master-0 kubenswrapper[8244]: livez check failed Mar 18 09:56:48.267335 master-0 kubenswrapper[8244]: I0318 09:56:48.267274 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" podUID="0c7b317c-d141-4e69-9c82-4a5dda6c3248" 
containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:56:48.823846 master-0 kubenswrapper[8244]: E0318 09:56:48.823680 8244 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event=< Mar 18 09:56:48.823846 master-0 kubenswrapper[8244]: &Event{ObjectMeta:{apiserver-687747fbb4-k7dnf.189de6ee767c3391 openshift-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-apiserver,Name:apiserver-687747fbb4-k7dnf,UID:0c7b317c-d141-4e69-9c82-4a5dda6c3248,APIVersion:v1,ResourceVersion:6190,FieldPath:spec.containers{openshift-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 500 Mar 18 09:56:48.823846 master-0 kubenswrapper[8244]: body: [+]ping ok Mar 18 09:56:48.823846 master-0 kubenswrapper[8244]: [+]log ok Mar 18 09:56:48.823846 master-0 kubenswrapper[8244]: [+]etcd ok Mar 18 09:56:48.823846 master-0 kubenswrapper[8244]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 18 09:56:48.823846 master-0 kubenswrapper[8244]: [+]poststarthook/generic-apiserver-start-informers ok Mar 18 09:56:48.823846 master-0 kubenswrapper[8244]: [+]poststarthook/max-in-flight-filter ok Mar 18 09:56:48.823846 master-0 kubenswrapper[8244]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 18 09:56:48.823846 master-0 kubenswrapper[8244]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 18 09:56:48.823846 master-0 kubenswrapper[8244]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 18 09:56:48.823846 master-0 kubenswrapper[8244]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Mar 18 09:56:48.823846 master-0 kubenswrapper[8244]: [+]poststarthook/project.openshift.io-projectcache ok Mar 18 09:56:48.823846 master-0 kubenswrapper[8244]: 
[+]poststarthook/project.openshift.io-projectauthorizationcache ok
Mar 18 09:56:48.823846 master-0 kubenswrapper[8244]: [+]poststarthook/openshift.io-startinformers ok
Mar 18 09:56:48.823846 master-0 kubenswrapper[8244]: [+]poststarthook/openshift.io-restmapperupdater ok
Mar 18 09:56:48.823846 master-0 kubenswrapper[8244]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Mar 18 09:56:48.823846 master-0 kubenswrapper[8244]: livez check failed
Mar 18 09:56:48.823846 master-0 kubenswrapper[8244]:
Mar 18 09:56:48.823846 master-0 kubenswrapper[8244]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:55:40.118299537 +0000 UTC m=+56.598035675,LastTimestamp:2026-03-18 09:55:40.118299537 +0000 UTC m=+56.598035675,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}
Mar 18 09:56:48.823846 master-0 kubenswrapper[8244]: >
Mar 18 09:56:49.387137 master-0 kubenswrapper[8244]: E0318 09:56:49.387068 8244 projected.go:194] Error preparing data for projected volume kube-api-access-958k6 for pod openshift-marketplace/community-operators-lx2pt: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 18 09:56:49.387429 master-0 kubenswrapper[8244]: E0318 09:56:49.387177 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3e884f11-9ace-4ef9-930a-05e170d1bfab-kube-api-access-958k6 podName:3e884f11-9ace-4ef9-930a-05e170d1bfab nodeName:}" failed. No retries permitted until 2026-03-18 09:56:50.387146593 +0000 UTC m=+126.866882761 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "kube-api-access-958k6" (UniqueName: "kubernetes.io/projected/3e884f11-9ace-4ef9-930a-05e170d1bfab-kube-api-access-958k6") pod "community-operators-lx2pt" (UID: "3e884f11-9ace-4ef9-930a-05e170d1bfab") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 18 09:56:49.387429 master-0 kubenswrapper[8244]: E0318 09:56:49.387334 8244 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler/installer-4-master-0: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 18 09:56:49.387576 master-0 kubenswrapper[8244]: E0318 09:56:49.387441 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54a208d1-afe8-49b5-92e0-e27afb4abc80-kube-api-access podName:54a208d1-afe8-49b5-92e0-e27afb4abc80 nodeName:}" failed. No retries permitted until 2026-03-18 09:56:50.38741589 +0000 UTC m=+126.867152118 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/54a208d1-afe8-49b5-92e0-e27afb4abc80-kube-api-access") pod "installer-4-master-0" (UID: "54a208d1-afe8-49b5-92e0-e27afb4abc80") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 18 09:56:50.458438 master-0 kubenswrapper[8244]: I0318 09:56:50.458359 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54a208d1-afe8-49b5-92e0-e27afb4abc80-kube-api-access\") pod \"installer-4-master-0\" (UID: \"54a208d1-afe8-49b5-92e0-e27afb4abc80\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 18 09:56:50.459459 master-0 kubenswrapper[8244]: I0318 09:56:50.459065 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-958k6\" (UniqueName: \"kubernetes.io/projected/3e884f11-9ace-4ef9-930a-05e170d1bfab-kube-api-access-958k6\") pod \"community-operators-lx2pt\" (UID: \"3e884f11-9ace-4ef9-930a-05e170d1bfab\") " pod="openshift-marketplace/community-operators-lx2pt"
Mar 18 09:56:51.186137 master-0 kubenswrapper[8244]: I0318 09:56:51.186054 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body=
Mar 18 09:56:51.186411 master-0 kubenswrapper[8244]: I0318 09:56:51.186212 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused"
Mar 18 09:56:51.831728 master-0 kubenswrapper[8244]: I0318 09:56:51.831653 8244 patch_prober.go:28] interesting pod/apiserver-687747fbb4-k7dnf container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Mar 18 09:56:51.831728 master-0 kubenswrapper[8244]: [+]log ok
Mar 18 09:56:51.831728 master-0 kubenswrapper[8244]: [+]etcd ok
Mar 18 09:56:51.831728 master-0 kubenswrapper[8244]: [+]poststarthook/start-apiserver-admission-initializer ok
Mar 18 09:56:51.831728 master-0 kubenswrapper[8244]: [+]poststarthook/generic-apiserver-start-informers ok
Mar 18 09:56:51.831728 master-0 kubenswrapper[8244]: [+]poststarthook/max-in-flight-filter ok
Mar 18 09:56:51.831728 master-0 kubenswrapper[8244]: [+]poststarthook/storage-object-count-tracker-hook ok
Mar 18 09:56:51.831728 master-0 kubenswrapper[8244]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Mar 18 09:56:51.831728 master-0 kubenswrapper[8244]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Mar 18 09:56:51.831728 master-0 kubenswrapper[8244]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Mar 18 09:56:51.831728 master-0 kubenswrapper[8244]: [+]poststarthook/project.openshift.io-projectcache ok
Mar 18 09:56:51.831728 master-0 kubenswrapper[8244]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Mar 18 09:56:51.831728 master-0 kubenswrapper[8244]: [+]poststarthook/openshift.io-startinformers ok
Mar 18 09:56:51.831728 master-0 kubenswrapper[8244]: [+]poststarthook/openshift.io-restmapperupdater ok
Mar 18 09:56:51.831728 master-0 kubenswrapper[8244]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Mar 18 09:56:51.831728 master-0 kubenswrapper[8244]: livez check failed
Mar 18 09:56:51.831728 master-0 kubenswrapper[8244]: I0318 09:56:51.831722 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" podUID="0c7b317c-d141-4e69-9c82-4a5dda6c3248" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:56:51.838944 master-0 kubenswrapper[8244]: I0318 09:56:51.838866 8244 patch_prober.go:28] interesting pod/apiserver-687747fbb4-k7dnf container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Mar 18 09:56:51.838944 master-0 kubenswrapper[8244]: [+]log ok
Mar 18 09:56:51.838944 master-0 kubenswrapper[8244]: [+]etcd ok
Mar 18 09:56:51.838944 master-0 kubenswrapper[8244]: [+]poststarthook/start-apiserver-admission-initializer ok
Mar 18 09:56:51.838944 master-0 kubenswrapper[8244]: [+]poststarthook/generic-apiserver-start-informers ok
Mar 18 09:56:51.838944 master-0 kubenswrapper[8244]: [+]poststarthook/max-in-flight-filter ok
Mar 18 09:56:51.838944 master-0 kubenswrapper[8244]: [+]poststarthook/storage-object-count-tracker-hook ok
Mar 18 09:56:51.838944 master-0 kubenswrapper[8244]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Mar 18 09:56:51.838944 master-0 kubenswrapper[8244]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Mar 18 09:56:51.838944 master-0 kubenswrapper[8244]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Mar 18 09:56:51.838944 master-0 kubenswrapper[8244]: [+]poststarthook/project.openshift.io-projectcache ok
Mar 18 09:56:51.838944 master-0 kubenswrapper[8244]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Mar 18 09:56:51.838944 master-0 kubenswrapper[8244]: [+]poststarthook/openshift.io-startinformers ok
Mar 18 09:56:51.838944 master-0 kubenswrapper[8244]: [+]poststarthook/openshift.io-restmapperupdater ok
Mar 18 09:56:51.838944 master-0 kubenswrapper[8244]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Mar 18 09:56:51.838944 master-0 kubenswrapper[8244]: livez check failed
Mar 18 09:56:51.840007 master-0 kubenswrapper[8244]: I0318 09:56:51.838966 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" podUID="0c7b317c-d141-4e69-9c82-4a5dda6c3248" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:56:54.185626 master-0 kubenswrapper[8244]: I0318 09:56:54.185561 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body=
Mar 18 09:56:54.186405 master-0 kubenswrapper[8244]: I0318 09:56:54.185660 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused"
Mar 18 09:56:55.098852 master-0 kubenswrapper[8244]: I0318 09:56:55.098744 8244 patch_prober.go:28] interesting pod/apiserver-687747fbb4-k7dnf container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Mar 18 09:56:55.098852 master-0 kubenswrapper[8244]: [+]log ok
Mar 18 09:56:55.098852 master-0 kubenswrapper[8244]: [+]etcd ok
Mar 18 09:56:55.098852 master-0 kubenswrapper[8244]: [+]poststarthook/start-apiserver-admission-initializer ok
Mar 18 09:56:55.098852 master-0 kubenswrapper[8244]: [+]poststarthook/generic-apiserver-start-informers ok
Mar 18 09:56:55.098852 master-0 kubenswrapper[8244]: [+]poststarthook/max-in-flight-filter ok
Mar 18 09:56:55.098852 master-0 kubenswrapper[8244]: [+]poststarthook/storage-object-count-tracker-hook ok
Mar 18 09:56:55.098852 master-0 kubenswrapper[8244]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Mar 18 09:56:55.098852 master-0 kubenswrapper[8244]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Mar 18 09:56:55.098852 master-0 kubenswrapper[8244]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Mar 18 09:56:55.098852 master-0 kubenswrapper[8244]: [+]poststarthook/project.openshift.io-projectcache ok
Mar 18 09:56:55.098852 master-0 kubenswrapper[8244]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Mar 18 09:56:55.098852 master-0 kubenswrapper[8244]: [+]poststarthook/openshift.io-startinformers ok
Mar 18 09:56:55.098852 master-0 kubenswrapper[8244]: [+]poststarthook/openshift.io-restmapperupdater ok
Mar 18 09:56:55.098852 master-0 kubenswrapper[8244]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Mar 18 09:56:55.098852 master-0 kubenswrapper[8244]: livez check failed
Mar 18 09:56:55.099917 master-0 kubenswrapper[8244]: I0318 09:56:55.098884 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" podUID="0c7b317c-d141-4e69-9c82-4a5dda6c3248" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:56:55.439187 master-0 kubenswrapper[8244]: E0318 09:56:55.439048 8244 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.69s"
Mar 18 09:56:55.439187 master-0 kubenswrapper[8244]: I0318 09:56:55.439141 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4s2km"
Mar 18 09:56:55.446039 master-0 kubenswrapper[8244]: I0318 09:56:55.445984 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-958k6\" (UniqueName: \"kubernetes.io/projected/3e884f11-9ace-4ef9-930a-05e170d1bfab-kube-api-access-958k6\") pod \"community-operators-lx2pt\" (UID: \"3e884f11-9ace-4ef9-930a-05e170d1bfab\") " pod="openshift-marketplace/community-operators-lx2pt"
Mar 18 09:56:55.451872 master-0 kubenswrapper[8244]: I0318 09:56:55.451784 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54a208d1-afe8-49b5-92e0-e27afb4abc80-kube-api-access\") pod \"installer-4-master-0\" (UID: \"54a208d1-afe8-49b5-92e0-e27afb4abc80\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 18 09:56:55.452062 master-0 kubenswrapper[8244]: I0318 09:56:55.451951 8244 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Mar 18 09:56:55.455406 master-0 kubenswrapper[8244]: I0318 09:56:55.455267 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"5fb70bf3-93cd-4000-be1a-8e21846d5709","Type":"ContainerDied","Data":"22a0f37f7177929cbf4f5043d36e78b2ea4f84b8562060ced4185a407eb57943"}
Mar 18 09:56:55.455711 master-0 kubenswrapper[8244]: I0318 09:56:55.455665 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4s2km"
Mar 18 09:56:55.458517 master-0 kubenswrapper[8244]: I0318 09:56:55.458453 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hn6md"
Mar 18 09:56:55.458695 master-0 kubenswrapper[8244]: I0318 09:56:55.458527 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:56:55.458695 master-0 kubenswrapper[8244]: I0318 09:56:55.458556 8244 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr"
Mar 18 09:56:55.458695 master-0 kubenswrapper[8244]: I0318 09:56:55.458582 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-8srnz" event={"ID":"9ccdc221-4ec5-487e-8ec4-85284ed628d8","Type":"ContainerDied","Data":"809e75633cdef66e6f08501f6041dd63595d2c3bfee4b8663f566a1c8682596e"}
Mar 18 09:56:55.458695 master-0 kubenswrapper[8244]: I0318 09:56:55.458620 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerDied","Data":"29ee184cf90562745faf61ae9cbea267c5bcca4ff035e200a79195304cec3e37"}
Mar 18 09:56:55.458695 master-0 kubenswrapper[8244]: I0318 09:56:55.458654 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" event={"ID":"a078565a-6970-4f42-84f4-938f1d637245","Type":"ContainerDied","Data":"035a83745bfe6ed219f87a31bd7766c9d9b162354f5f4e36d6dc8a6cc1dbc053"}
Mar 18 09:56:55.459241 master-0 kubenswrapper[8244]: I0318 09:56:55.458696 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698" event={"ID":"ec53d7fa-445b-4e1d-84ef-545f08e80ccc","Type":"ContainerDied","Data":"5852b37c5e8c94f0baa4c4a1981174d60f6d9f69d3672da3d78ad25102d900a1"}
Mar 18 09:56:55.459241 master-0 kubenswrapper[8244]: I0318 09:56:55.458744 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr" event={"ID":"bb35841e-d992-4044-aaaa-06c9faf47bd0","Type":"ContainerDied","Data":"21ea6abc98e78a0444eb255d9f1edf6ce13e5e0f11a1d4b38c35dd0e5e280fcf"}
Mar 18 09:56:55.459241 master-0 kubenswrapper[8244]: I0318 09:56:55.458799 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" event={"ID":"0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480","Type":"ContainerDied","Data":"fe475c93acb3e152a06334aa122f61bc3dfe0a7c617c3c6b5b5bc407433dfd76"}
Mar 18 09:56:55.459241 master-0 kubenswrapper[8244]: I0318 09:56:55.458890 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" event={"ID":"0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480","Type":"ContainerStarted","Data":"7d07e8c06ddf9d3c29ebaf294b7a205901752e302793187eb4f8dcbb44b41fc0"}
Mar 18 09:56:55.459241 master-0 kubenswrapper[8244]: I0318 09:56:55.458912 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7fl4x" event={"ID":"bb942756-bac7-414d-b179-cebdce588a13","Type":"ContainerDied","Data":"11b5b6c3c569b883f4e3bfd269fb3345429d4cace9fc05301ab08ee60a18aa95"}
Mar 18 09:56:55.459241 master-0 kubenswrapper[8244]: I0318 09:56:55.458936 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc" event={"ID":"0999f781-3299-4cb6-ba76-2a4f4584c685","Type":"ContainerDied","Data":"e5c331496115ef5ceb50ea93103ae754d1d16032e25eefad5a38ee8ba0e6ac68"}
Mar 18 09:56:55.459241 master-0 kubenswrapper[8244]: I0318 09:56:55.458960 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"b1623083f7035de6d7b40e1f05f6d5b195a69fc0fe34d1a182012e756cac80ac"}
Mar 18 09:56:55.459241 master-0 kubenswrapper[8244]: I0318 09:56:55.458980 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"8499a2d7c7a3d0807ff2dbfdf75fb879a647f69e89c6a7abf9768989efb505e2"}
Mar 18 09:56:55.459241 master-0 kubenswrapper[8244]: I0318 09:56:55.458998 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"86bd68e4c693d142d1db15cea87ea6f7837f758701127f7c99332e838f73bebb"}
Mar 18 09:56:55.459241 master-0 kubenswrapper[8244]: I0318 09:56:55.459016 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"42a149c50fa78a2d332b3b42781dca70edc3a9ddbb27284d7e5aad00fc9537cf"}
Mar 18 09:56:55.459241 master-0 kubenswrapper[8244]: I0318 09:56:55.459036 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"a1cc9c7d23364160318dce9d2e1458d63a625f367c87f4f12ac9c93662e4dc3c"}
Mar 18 09:56:55.459241 master-0 kubenswrapper[8244]: I0318 09:56:55.459054 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr" event={"ID":"f076eaf0-b041-4db0-ba06-3d85e23bb654","Type":"ContainerDied","Data":"86e19dd48a4220e684cd4591a7ea73d2539f388a0f50f6f6c55feee37bcbb65f"}
Mar 18 09:56:55.459241 master-0 kubenswrapper[8244]: I0318 09:56:55.459119 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt" event={"ID":"3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6","Type":"ContainerDied","Data":"da02ee0de03a088a8c40f809ca8f007d6167a1c499d12f1066049752159499b0"}
Mar 18 09:56:55.459241 master-0 kubenswrapper[8244]: I0318 09:56:55.459158 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7" event={"ID":"54e26470-5ffb-4673-9375-e80031cc6750","Type":"ContainerDied","Data":"1248d2a0db71d324c2c95a679e324dd57a6ddd00508bb65cb77279b8a3a015b8"}
Mar 18 09:56:55.459241 master-0 kubenswrapper[8244]: I0318 09:56:55.459183 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb" event={"ID":"6a6a616d-012a-479e-ab3d-b21295ea1805","Type":"ContainerDied","Data":"baecef73d93e3ca9ff934b2e1c379d4ea8c4c91e3cae11e23b740ee52145d967"}
Mar 18 09:56:55.469237 master-0 kubenswrapper[8244]: I0318 09:56:55.459360 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body=
Mar 18 09:56:55.469237 master-0 kubenswrapper[8244]: I0318 09:56:55.459413 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused"
Mar 18 09:56:55.469237 master-0 kubenswrapper[8244]: I0318 09:56:55.459536 8244 scope.go:117] "RemoveContainer" containerID="21ea6abc98e78a0444eb255d9f1edf6ce13e5e0f11a1d4b38c35dd0e5e280fcf"
Mar 18 09:56:55.469237 master-0 kubenswrapper[8244]: I0318 09:56:55.459634 8244 scope.go:117] "RemoveContainer" containerID="baecef73d93e3ca9ff934b2e1c379d4ea8c4c91e3cae11e23b740ee52145d967"
Mar 18 09:56:55.469237 master-0 kubenswrapper[8244]: I0318 09:56:55.460338 8244 scope.go:117] "RemoveContainer" containerID="da02ee0de03a088a8c40f809ca8f007d6167a1c499d12f1066049752159499b0"
Mar 18 09:56:55.469237 master-0 kubenswrapper[8244]: I0318 09:56:55.460674 8244 scope.go:117] "RemoveContainer" containerID="5852b37c5e8c94f0baa4c4a1981174d60f6d9f69d3672da3d78ad25102d900a1"
Mar 18 09:56:55.469237 master-0 kubenswrapper[8244]: I0318 09:56:55.460907 8244 scope.go:117] "RemoveContainer" containerID="809e75633cdef66e6f08501f6041dd63595d2c3bfee4b8663f566a1c8682596e"
Mar 18 09:56:55.469237 master-0 kubenswrapper[8244]: I0318 09:56:55.461285 8244 scope.go:117] "RemoveContainer" containerID="86e19dd48a4220e684cd4591a7ea73d2539f388a0f50f6f6c55feee37bcbb65f"
Mar 18 09:56:55.469237 master-0 kubenswrapper[8244]: I0318 09:56:55.461451 8244 scope.go:117] "RemoveContainer" containerID="1248d2a0db71d324c2c95a679e324dd57a6ddd00508bb65cb77279b8a3a015b8"
Mar 18 09:56:55.469237 master-0 kubenswrapper[8244]: I0318 09:56:55.461791 8244 scope.go:117] "RemoveContainer" containerID="11b5b6c3c569b883f4e3bfd269fb3345429d4cace9fc05301ab08ee60a18aa95"
Mar 18 09:56:55.469237 master-0 kubenswrapper[8244]: I0318 09:56:55.462512 8244 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"25e8b4ad00ce2bdd7986e5a3dbebb908681f21787c999f9ac28c5b382c85fc69"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Mar 18 09:56:55.469237 master-0 kubenswrapper[8244]: I0318 09:56:55.462641 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" containerID="cri-o://25e8b4ad00ce2bdd7986e5a3dbebb908681f21787c999f9ac28c5b382c85fc69" gracePeriod=30
Mar 18 09:56:55.469237 master-0 kubenswrapper[8244]: I0318 09:56:55.463608 8244 scope.go:117] "RemoveContainer" containerID="e5c331496115ef5ceb50ea93103ae754d1d16032e25eefad5a38ee8ba0e6ac68"
Mar 18 09:56:55.469237 master-0 kubenswrapper[8244]: I0318 09:56:55.463673 8244 scope.go:117] "RemoveContainer" containerID="035a83745bfe6ed219f87a31bd7766c9d9b162354f5f4e36d6dc8a6cc1dbc053"
Mar 18 09:56:55.490209 master-0 kubenswrapper[8244]: I0318 09:56:55.480267 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"]
Mar 18 09:56:55.490209 master-0 kubenswrapper[8244]: I0318 09:56:55.480310 8244 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="09456b6d-4d70-40a2-bb12-158141ec842b"
Mar 18 09:56:55.494228 master-0 kubenswrapper[8244]: I0318 09:56:55.493663 8244 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"]
Mar 18 09:56:55.494228 master-0 kubenswrapper[8244]: I0318 09:56:55.493727 8244 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="09456b6d-4d70-40a2-bb12-158141ec842b"
Mar 18 09:56:55.500063 master-0 kubenswrapper[8244]: I0318 09:56:55.500009 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-664bd974c9-7w9f6"]
Mar 18 09:56:55.503093 master-0 kubenswrapper[8244]: I0318 09:56:55.501960 8244 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-664bd974c9-7w9f6"]
Mar 18 09:56:55.515947 master-0 kubenswrapper[8244]: I0318 09:56:55.515758 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2n6d2" podStartSLOduration=57.185512031 podStartE2EDuration="1m19.515738126s" podCreationTimestamp="2026-03-18 09:55:36 +0000 UTC" firstStartedPulling="2026-03-18 09:55:41.789369636 +0000 UTC m=+58.269105764" lastFinishedPulling="2026-03-18 09:56:04.119595691 +0000 UTC m=+80.599331859" observedRunningTime="2026-03-18 09:56:55.509698287 +0000 UTC m=+131.989434435" watchObservedRunningTime="2026-03-18 09:56:55.515738126 +0000 UTC m=+131.995474284"
Mar 18 09:56:55.517633 master-0 kubenswrapper[8244]: I0318 09:56:55.517595 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lx2pt"
Mar 18 09:56:55.528755 master-0 kubenswrapper[8244]: I0318 09:56:55.527633 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hn6md"
Mar 18 09:56:55.620640 master-0 kubenswrapper[8244]: I0318 09:56:55.620552 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" podStartSLOduration=92.037767189 podStartE2EDuration="1m49.620529839s" podCreationTimestamp="2026-03-18 09:55:06 +0000 UTC" firstStartedPulling="2026-03-18 09:55:14.315709075 +0000 UTC m=+30.795445213" lastFinishedPulling="2026-03-18 09:55:31.898471735 +0000 UTC m=+48.378207863" observedRunningTime="2026-03-18 09:56:55.618232092 +0000 UTC m=+132.097968220" watchObservedRunningTime="2026-03-18 09:56:55.620529839 +0000 UTC m=+132.100265967"
Mar 18 09:56:55.657203 master-0 kubenswrapper[8244]: I0318 09:56:55.657136 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4s2km" podStartSLOduration=55.235508882 podStartE2EDuration="1m22.657121134s" podCreationTimestamp="2026-03-18 09:55:33 +0000 UTC" firstStartedPulling="2026-03-18 09:55:36.694926896 +0000 UTC m=+53.174663034" lastFinishedPulling="2026-03-18 09:56:04.116539118 +0000 UTC m=+80.596275286" observedRunningTime="2026-03-18 09:56:55.655945195 +0000 UTC m=+132.135681323" watchObservedRunningTime="2026-03-18 09:56:55.657121134 +0000 UTC m=+132.136857262"
Mar 18 09:56:55.678486 master-0 kubenswrapper[8244]: I0318 09:56:55.672187 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-master-0" podStartSLOduration=78.672171726 podStartE2EDuration="1m18.672171726s" podCreationTimestamp="2026-03-18 09:55:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:56:55.669416618 +0000 UTC m=+132.149152746" watchObservedRunningTime="2026-03-18 09:56:55.672171726 +0000 UTC m=+132.151907854"
Mar 18 09:56:55.716813 master-0 kubenswrapper[8244]: I0318 09:56:55.716772 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Mar 18 09:56:55.785756 master-0 kubenswrapper[8244]: I0318 09:56:55.785667 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hn6md" podStartSLOduration=57.379707796 podStartE2EDuration="1m21.785646583s" podCreationTimestamp="2026-03-18 09:55:34 +0000 UTC" firstStartedPulling="2026-03-18 09:55:39.723131482 +0000 UTC m=+56.202867610" lastFinishedPulling="2026-03-18 09:56:04.129070259 +0000 UTC m=+80.608806397" observedRunningTime="2026-03-18 09:56:55.783996532 +0000 UTC m=+132.263732670" watchObservedRunningTime="2026-03-18 09:56:55.785646583 +0000 UTC m=+132.265382711"
Mar 18 09:56:55.815040 master-0 kubenswrapper[8244]: I0318 09:56:55.811593 8244 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f251e1b-0f5d-460f-8152-c9201dba0cff" path="/var/lib/kubelet/pods/9f251e1b-0f5d-460f-8152-c9201dba0cff/volumes"
Mar 18 09:56:55.815970 master-0 kubenswrapper[8244]: I0318 09:56:55.815166 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Mar 18 09:56:55.815970 master-0 kubenswrapper[8244]: I0318 09:56:55.815198 8244 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Mar 18 09:56:55.826678 master-0 kubenswrapper[8244]: I0318 09:56:55.826617 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-f8f5f6bc4-87dt7_54e26470-5ffb-4673-9375-e80031cc6750/controller-manager/0.log"
Mar 18 09:56:55.826935 master-0 kubenswrapper[8244]: I0318 09:56:55.826731 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7" event={"ID":"54e26470-5ffb-4673-9375-e80031cc6750","Type":"ContainerStarted","Data":"a3602e50826c30fb0a6aafc5be0e48c4b539e69bcb2efce748d1524de14ad2a2"}
Mar 18 09:56:55.827698 master-0 kubenswrapper[8244]: I0318 09:56:55.827669 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7"
Mar 18 09:56:55.827787 master-0 kubenswrapper[8244]: I0318 09:56:55.827744 8244 patch_prober.go:28] interesting pod/controller-manager-f8f5f6bc4-87dt7 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.49:8443/healthz\": dial tcp 10.128.0.49:8443: connect: connection refused" start-of-body=
Mar 18 09:56:55.827787 master-0 kubenswrapper[8244]: I0318 09:56:55.827776 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7" podUID="54e26470-5ffb-4673-9375-e80031cc6750" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.49:8443/healthz\": dial tcp 10.128.0.49:8443: connect: connection refused"
Mar 18 09:56:55.840518 master-0 kubenswrapper[8244]: I0318 09:56:55.840459 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 18 09:56:55.854484 master-0 kubenswrapper[8244]: I0318 09:56:55.854428 8244 generic.go:334] "Generic (PLEG): container finished" podID="0d72e695-0183-4ee8-8add-5425e67f7138" containerID="756a2f4fb3414c500a82e436fbad8cd30da785b7959d7459fc20c6af350a8060" exitCode=0
Mar 18 09:56:55.854726 master-0 kubenswrapper[8244]: I0318 09:56:55.854552 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c" event={"ID":"0d72e695-0183-4ee8-8add-5425e67f7138","Type":"ContainerDied","Data":"756a2f4fb3414c500a82e436fbad8cd30da785b7959d7459fc20c6af350a8060"}
Mar 18 09:56:55.855065 master-0 kubenswrapper[8244]: I0318 09:56:55.855039 8244 scope.go:117] "RemoveContainer" containerID="756a2f4fb3414c500a82e436fbad8cd30da785b7959d7459fc20c6af350a8060"
Mar 18 09:56:55.861420 master-0 kubenswrapper[8244]: I0318 09:56:55.861374 8244 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 18 09:56:55.895705 master-0 kubenswrapper[8244]: I0318 09:56:55.895638 8244 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="25e8b4ad00ce2bdd7986e5a3dbebb908681f21787c999f9ac28c5b382c85fc69" exitCode=2
Mar 18 09:56:55.895907 master-0 kubenswrapper[8244]: I0318 09:56:55.895745 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"25e8b4ad00ce2bdd7986e5a3dbebb908681f21787c999f9ac28c5b382c85fc69"}
Mar 18 09:56:55.895907 master-0 kubenswrapper[8244]: I0318 09:56:55.895796 8244 scope.go:117] "RemoveContainer" containerID="0f4ef82cd98a641ac2372a9202df576de9d16287dc2775cc6c0529b93f52b3e6"
Mar 18 09:56:55.903308 master-0 kubenswrapper[8244]: I0318 09:56:55.902092 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr" event={"ID":"f076eaf0-b041-4db0-ba06-3d85e23bb654","Type":"ContainerStarted","Data":"7899027579e9cd9f7fcc12484390d733833facf13d02a5193e75c23ee942e285"}
Mar 18 09:56:55.911601 master-0 kubenswrapper[8244]: I0318 09:56:55.910163 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr" event={"ID":"bb35841e-d992-4044-aaaa-06c9faf47bd0","Type":"ContainerStarted","Data":"76f59e21155c1d71669d55451f86d8b5a3fe790b476c844c6bc57c22a2e68f76"}
Mar 18 09:56:55.953047 master-0 kubenswrapper[8244]: I0318 09:56:55.952937 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 18 09:56:55.965598 master-0 kubenswrapper[8244]: I0318 09:56:55.965503 8244 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 18 09:56:55.996090 master-0 kubenswrapper[8244]: I0318 09:56:55.993965 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7" podStartSLOduration=94.993945146 podStartE2EDuration="1m34.993945146s" podCreationTimestamp="2026-03-18 09:55:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:56:55.992526381 +0000 UTC m=+132.472262509" watchObservedRunningTime="2026-03-18 09:56:55.993945146 +0000 UTC m=+132.473681274"
Mar 18 09:56:56.043441 master-0 kubenswrapper[8244]: I0318 09:56:56.041783 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-54cf6885f8-xsgcr" podStartSLOduration=94.041768469 podStartE2EDuration="1m34.041768469s" podCreationTimestamp="2026-03-18 09:55:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:56:56.040378534 +0000 UTC m=+132.520114682" watchObservedRunningTime="2026-03-18 09:56:56.041768469 +0000 UTC m=+132.521504597"
Mar 18 09:56:56.157763 master-0 kubenswrapper[8244]: I0318 09:56:56.155460 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lx2pt"]
Mar 18 09:56:56.322406 master-0 kubenswrapper[8244]: I0318 09:56:56.320719 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_5fb70bf3-93cd-4000-be1a-8e21846d5709/installer/0.log"
Mar 18 09:56:56.322406 master-0 kubenswrapper[8244]: I0318 09:56:56.320789 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Mar 18 09:56:56.357896 master-0 kubenswrapper[8244]: I0318 09:56:56.357848 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5fb70bf3-93cd-4000-be1a-8e21846d5709-kubelet-dir\") pod \"5fb70bf3-93cd-4000-be1a-8e21846d5709\" (UID: \"5fb70bf3-93cd-4000-be1a-8e21846d5709\") "
Mar 18 09:56:56.357994 master-0 kubenswrapper[8244]: I0318 09:56:56.357901 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5fb70bf3-93cd-4000-be1a-8e21846d5709-var-lock\") pod \"5fb70bf3-93cd-4000-be1a-8e21846d5709\" (UID: \"5fb70bf3-93cd-4000-be1a-8e21846d5709\") "
Mar 18 09:56:56.357994 master-0 kubenswrapper[8244]: I0318 09:56:56.357933 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5fb70bf3-93cd-4000-be1a-8e21846d5709-kube-api-access\") pod \"5fb70bf3-93cd-4000-be1a-8e21846d5709\" (UID: \"5fb70bf3-93cd-4000-be1a-8e21846d5709\") "
Mar 18 09:56:56.358349 master-0 kubenswrapper[8244]: I0318 09:56:56.358320 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fb70bf3-93cd-4000-be1a-8e21846d5709-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5fb70bf3-93cd-4000-be1a-8e21846d5709" (UID: "5fb70bf3-93cd-4000-be1a-8e21846d5709"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:56:56.358391 master-0 kubenswrapper[8244]: I0318 09:56:56.358360 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fb70bf3-93cd-4000-be1a-8e21846d5709-var-lock" (OuterVolumeSpecName: "var-lock") pod "5fb70bf3-93cd-4000-be1a-8e21846d5709" (UID: "5fb70bf3-93cd-4000-be1a-8e21846d5709"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:56:56.368650 master-0 kubenswrapper[8244]: I0318 09:56:56.368591 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fb70bf3-93cd-4000-be1a-8e21846d5709-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5fb70bf3-93cd-4000-be1a-8e21846d5709" (UID: "5fb70bf3-93cd-4000-be1a-8e21846d5709"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 09:56:56.406882 master-0 kubenswrapper[8244]: I0318 09:56:56.406681 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"]
Mar 18 09:56:56.459365 master-0 kubenswrapper[8244]: I0318 09:56:56.459318 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5fb70bf3-93cd-4000-be1a-8e21846d5709-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 09:56:56.459365 master-0 kubenswrapper[8244]: I0318 09:56:56.459350 8244 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5fb70bf3-93cd-4000-be1a-8e21846d5709-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 09:56:56.459365 master-0 kubenswrapper[8244]: I0318 09:56:56.459359 8244 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5fb70bf3-93cd-4000-be1a-8e21846d5709-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 09:56:56.695946
master-0 kubenswrapper[8244]: I0318 09:56:56.691051 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 09:56:56.695946 master-0 kubenswrapper[8244]: I0318 09:56:56.691107 8244 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 09:56:56.927493 master-0 kubenswrapper[8244]: I0318 09:56:56.927390 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-8srnz_9ccdc221-4ec5-487e-8ec4-85284ed628d8/network-operator/0.log" Mar 18 09:56:56.927493 master-0 kubenswrapper[8244]: I0318 09:56:56.927475 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-8srnz" event={"ID":"9ccdc221-4ec5-487e-8ec4-85284ed628d8","Type":"ContainerStarted","Data":"b5bf205c4d2d39a65c5f434aca2db07e6f6c44b756c420c12726c015f7a4b2e6"} Mar 18 09:56:56.932454 master-0 kubenswrapper[8244]: I0318 09:56:56.932415 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698" event={"ID":"ec53d7fa-445b-4e1d-84ef-545f08e80ccc","Type":"ContainerStarted","Data":"100b826fb47409f3adda82931968130591dc6b1e7420f5ccfd2ef57c6281504c"} Mar 18 09:56:56.941680 master-0 kubenswrapper[8244]: I0318 09:56:56.940106 8244 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c" event={"ID":"0d72e695-0183-4ee8-8add-5425e67f7138","Type":"ContainerStarted","Data":"d7fed381f588321bf949c1ee4979e243946541c605dea6e2da6f26ae56dbca2b"} Mar 18 09:56:56.957866 master-0 kubenswrapper[8244]: I0318 09:56:56.956945 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"54a208d1-afe8-49b5-92e0-e27afb4abc80","Type":"ContainerStarted","Data":"d65f913e3d46ba5408795bb9c468d0294b6c4c00a07a18a41204ec7233a6d96b"} Mar 18 09:56:56.957866 master-0 kubenswrapper[8244]: I0318 09:56:56.956973 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"54a208d1-afe8-49b5-92e0-e27afb4abc80","Type":"ContainerStarted","Data":"412e9b55f8faac02229faa1064ae91e5d24b587483498fa55a3224e6f756199c"} Mar 18 09:56:56.965161 master-0 kubenswrapper[8244]: I0318 09:56:56.965111 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" event={"ID":"a078565a-6970-4f42-84f4-938f1d637245","Type":"ContainerStarted","Data":"ff998e161f24e27e62ffb41d5f1af2c4149f9709b9260bb197fe3f8937665152"} Mar 18 09:56:56.967201 master-0 kubenswrapper[8244]: I0318 09:56:56.967178 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"591676dbc66d002b41b1524c76dbc54235b7dd32e488240aec01e853c0930dc0"} Mar 18 09:56:56.979434 master-0 kubenswrapper[8244]: I0318 09:56:56.979271 8244 generic.go:334] "Generic (PLEG): container finished" podID="3e884f11-9ace-4ef9-930a-05e170d1bfab" containerID="1cbdc4e76d1d07790af02f84bca996c202797edf9bebfc3cedebf4576f85e31c" exitCode=0 Mar 18 09:56:56.979643 master-0 kubenswrapper[8244]: I0318 09:56:56.979522 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-lx2pt" event={"ID":"3e884f11-9ace-4ef9-930a-05e170d1bfab","Type":"ContainerDied","Data":"1cbdc4e76d1d07790af02f84bca996c202797edf9bebfc3cedebf4576f85e31c"} Mar 18 09:56:56.979643 master-0 kubenswrapper[8244]: I0318 09:56:56.979600 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lx2pt" event={"ID":"3e884f11-9ace-4ef9-930a-05e170d1bfab","Type":"ContainerStarted","Data":"92ea30e6b1acf0370980e9217d92b6832726a8cf9403f31798128c84642185d7"} Mar 18 09:56:56.985848 master-0 kubenswrapper[8244]: I0318 09:56:56.985624 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt" event={"ID":"3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6","Type":"ContainerStarted","Data":"ece038fe79c27be1029079683dfa33a1fa90e9515d0fac47aae2ee51f3ffd2df"} Mar 18 09:56:56.997515 master-0 kubenswrapper[8244]: I0318 09:56:56.997447 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc" event={"ID":"0999f781-3299-4cb6-ba76-2a4f4584c685","Type":"ContainerStarted","Data":"bd5fe04a9ede0b84f18ed45bdc7555eb6593622c877cdf75babe4d3ead617eed"} Mar 18 09:56:56.998848 master-0 kubenswrapper[8244]: I0318 09:56:56.998204 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-4-master-0" podStartSLOduration=76.998193008 podStartE2EDuration="1m16.998193008s" podCreationTimestamp="2026-03-18 09:55:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:56:56.997348247 +0000 UTC m=+133.477084375" watchObservedRunningTime="2026-03-18 09:56:56.998193008 +0000 UTC m=+133.477929136" Mar 18 09:56:57.006878 master-0 kubenswrapper[8244]: I0318 09:56:57.003866 8244 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb" event={"ID":"6a6a616d-012a-479e-ab3d-b21295ea1805","Type":"ContainerStarted","Data":"81cd35f002f1f429688cbe007f6618850051907823664181496568b308ab47bb"} Mar 18 09:56:57.009173 master-0 kubenswrapper[8244]: I0318 09:56:57.008809 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_5fb70bf3-93cd-4000-be1a-8e21846d5709/installer/0.log" Mar 18 09:56:57.009173 master-0 kubenswrapper[8244]: I0318 09:56:57.008920 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"5fb70bf3-93cd-4000-be1a-8e21846d5709","Type":"ContainerDied","Data":"1e692e8ac748487a3686bf48bba0af89ab5710b4a4e9840c96ef2c14535ec26e"} Mar 18 09:56:57.009173 master-0 kubenswrapper[8244]: I0318 09:56:57.008946 8244 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e692e8ac748487a3686bf48bba0af89ab5710b4a4e9840c96ef2c14535ec26e" Mar 18 09:56:57.009173 master-0 kubenswrapper[8244]: I0318 09:56:57.009040 8244 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 09:56:57.011527 master-0 kubenswrapper[8244]: I0318 09:56:57.011487 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-7fl4x_bb942756-bac7-414d-b179-cebdce588a13/approver/0.log" Mar 18 09:56:57.012953 master-0 kubenswrapper[8244]: I0318 09:56:57.012529 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7fl4x" event={"ID":"bb942756-bac7-414d-b179-cebdce588a13","Type":"ContainerStarted","Data":"8009f4f9bf68efb70bfa7b66731f5e2be25cbb5d97d4aeafc6a4a27c0d88d49e"} Mar 18 09:56:57.018867 master-0 kubenswrapper[8244]: I0318 09:56:57.018562 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7" Mar 18 09:56:57.185199 master-0 kubenswrapper[8244]: I0318 09:56:57.185155 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" Mar 18 09:56:57.332105 master-0 kubenswrapper[8244]: I0318 09:56:57.332058 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 09:56:57.741464 master-0 kubenswrapper[8244]: I0318 09:56:57.741386 8244 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00" path="/var/lib/kubelet/pods/1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00/volumes" Mar 18 09:56:57.742138 master-0 kubenswrapper[8244]: I0318 09:56:57.742087 8244 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b86644b-ddbd-4b14-b82d-b7d614f7f81e" path="/var/lib/kubelet/pods/2b86644b-ddbd-4b14-b82d-b7d614f7f81e/volumes" Mar 18 09:56:57.742813 master-0 kubenswrapper[8244]: I0318 09:56:57.742757 8244 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ef9dd029-9f8c-4f55-806b-e08ecd088607" path="/var/lib/kubelet/pods/ef9dd029-9f8c-4f55-806b-e08ecd088607/volumes" Mar 18 09:56:58.185662 master-0 kubenswrapper[8244]: I0318 09:56:58.185541 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 09:56:58.185662 master-0 kubenswrapper[8244]: I0318 09:56:58.185606 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 09:56:58.924661 master-0 kubenswrapper[8244]: I0318 09:56:58.922742 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 18 09:56:59.031984 master-0 kubenswrapper[8244]: I0318 09:56:59.031929 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4s2km"] Mar 18 09:56:59.032829 master-0 kubenswrapper[8244]: I0318 09:56:59.032779 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4s2km" podUID="2a4c7d0e-10a1-44d1-8874-8e2a76753106" containerName="registry-server" containerID="cri-o://5e3f39d6c47db56b54c454760e9bcbc843db42e436708139221b97d0f4eca4c1" gracePeriod=2 Mar 18 09:56:59.186511 master-0 kubenswrapper[8244]: I0318 09:56:59.186388 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: 
Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 09:56:59.186511 master-0 kubenswrapper[8244]: I0318 09:56:59.186458 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 09:56:59.234601 master-0 kubenswrapper[8244]: I0318 09:56:59.234561 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hn6md"] Mar 18 09:56:59.234959 master-0 kubenswrapper[8244]: I0318 09:56:59.234929 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hn6md" podUID="af588cc6-5c57-4fea-a8db-84bf34b647a3" containerName="registry-server" containerID="cri-o://91ebdefaf6e1db7f6ba006a75e8fa665d272029e470b99c96f6f3bc993072519" gracePeriod=2 Mar 18 09:56:59.452619 master-0 kubenswrapper[8244]: I0318 09:56:59.452591 8244 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4s2km" Mar 18 09:56:59.611790 master-0 kubenswrapper[8244]: I0318 09:56:59.611709 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8zbz\" (UniqueName: \"kubernetes.io/projected/2a4c7d0e-10a1-44d1-8874-8e2a76753106-kube-api-access-k8zbz\") pod \"2a4c7d0e-10a1-44d1-8874-8e2a76753106\" (UID: \"2a4c7d0e-10a1-44d1-8874-8e2a76753106\") " Mar 18 09:56:59.611790 master-0 kubenswrapper[8244]: I0318 09:56:59.611767 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a4c7d0e-10a1-44d1-8874-8e2a76753106-catalog-content\") pod \"2a4c7d0e-10a1-44d1-8874-8e2a76753106\" (UID: \"2a4c7d0e-10a1-44d1-8874-8e2a76753106\") " Mar 18 09:56:59.612160 master-0 kubenswrapper[8244]: I0318 09:56:59.611817 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a4c7d0e-10a1-44d1-8874-8e2a76753106-utilities\") pod \"2a4c7d0e-10a1-44d1-8874-8e2a76753106\" (UID: \"2a4c7d0e-10a1-44d1-8874-8e2a76753106\") " Mar 18 09:56:59.614149 master-0 kubenswrapper[8244]: I0318 09:56:59.614109 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a4c7d0e-10a1-44d1-8874-8e2a76753106-utilities" (OuterVolumeSpecName: "utilities") pod "2a4c7d0e-10a1-44d1-8874-8e2a76753106" (UID: "2a4c7d0e-10a1-44d1-8874-8e2a76753106"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 09:56:59.614749 master-0 kubenswrapper[8244]: I0318 09:56:59.614695 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a4c7d0e-10a1-44d1-8874-8e2a76753106-kube-api-access-k8zbz" (OuterVolumeSpecName: "kube-api-access-k8zbz") pod "2a4c7d0e-10a1-44d1-8874-8e2a76753106" (UID: "2a4c7d0e-10a1-44d1-8874-8e2a76753106"). 
InnerVolumeSpecName "kube-api-access-k8zbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:56:59.659567 master-0 kubenswrapper[8244]: I0318 09:56:59.659206 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a4c7d0e-10a1-44d1-8874-8e2a76753106-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2a4c7d0e-10a1-44d1-8874-8e2a76753106" (UID: "2a4c7d0e-10a1-44d1-8874-8e2a76753106"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 09:56:59.689437 master-0 kubenswrapper[8244]: I0318 09:56:59.689357 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 09:56:59.689643 master-0 kubenswrapper[8244]: I0318 09:56:59.689445 8244 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 09:56:59.712801 master-0 kubenswrapper[8244]: I0318 09:56:59.712711 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8zbz\" (UniqueName: \"kubernetes.io/projected/2a4c7d0e-10a1-44d1-8874-8e2a76753106-kube-api-access-k8zbz\") on node \"master-0\" DevicePath \"\"" Mar 18 09:56:59.712801 master-0 kubenswrapper[8244]: I0318 09:56:59.712753 8244 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/2a4c7d0e-10a1-44d1-8874-8e2a76753106-catalog-content\") on node \"master-0\" DevicePath \"\"" Mar 18 09:56:59.712801 master-0 kubenswrapper[8244]: I0318 09:56:59.712768 8244 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a4c7d0e-10a1-44d1-8874-8e2a76753106-utilities\") on node \"master-0\" DevicePath \"\"" Mar 18 09:57:00.032519 master-0 kubenswrapper[8244]: I0318 09:57:00.032372 8244 generic.go:334] "Generic (PLEG): container finished" podID="2a4c7d0e-10a1-44d1-8874-8e2a76753106" containerID="5e3f39d6c47db56b54c454760e9bcbc843db42e436708139221b97d0f4eca4c1" exitCode=0 Mar 18 09:57:00.032519 master-0 kubenswrapper[8244]: I0318 09:57:00.032436 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4s2km" Mar 18 09:57:00.032519 master-0 kubenswrapper[8244]: I0318 09:57:00.032465 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4s2km" event={"ID":"2a4c7d0e-10a1-44d1-8874-8e2a76753106","Type":"ContainerDied","Data":"5e3f39d6c47db56b54c454760e9bcbc843db42e436708139221b97d0f4eca4c1"} Mar 18 09:57:00.032519 master-0 kubenswrapper[8244]: I0318 09:57:00.032510 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4s2km" event={"ID":"2a4c7d0e-10a1-44d1-8874-8e2a76753106","Type":"ContainerDied","Data":"08783743f52be89af4082b555c9edcdac7a39fe043de87c8d2e069b82ff73c86"} Mar 18 09:57:00.033297 master-0 kubenswrapper[8244]: I0318 09:57:00.032534 8244 scope.go:117] "RemoveContainer" containerID="5e3f39d6c47db56b54c454760e9bcbc843db42e436708139221b97d0f4eca4c1" Mar 18 09:57:00.034931 master-0 kubenswrapper[8244]: I0318 09:57:00.034896 8244 generic.go:334] "Generic (PLEG): container finished" podID="af588cc6-5c57-4fea-a8db-84bf34b647a3" containerID="91ebdefaf6e1db7f6ba006a75e8fa665d272029e470b99c96f6f3bc993072519" exitCode=0 Mar 
18 09:57:00.034931 master-0 kubenswrapper[8244]: I0318 09:57:00.034928 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hn6md" event={"ID":"af588cc6-5c57-4fea-a8db-84bf34b647a3","Type":"ContainerDied","Data":"91ebdefaf6e1db7f6ba006a75e8fa665d272029e470b99c96f6f3bc993072519"} Mar 18 09:57:00.202921 master-0 kubenswrapper[8244]: I0318 09:57:00.202879 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:57:00.214420 master-0 kubenswrapper[8244]: I0318 09:57:00.214374 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 09:57:00.242647 master-0 kubenswrapper[8244]: I0318 09:57:00.242585 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4s2km"] Mar 18 09:57:00.260947 master-0 kubenswrapper[8244]: I0318 09:57:00.260882 8244 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4s2km"] Mar 18 09:57:01.185217 master-0 kubenswrapper[8244]: I0318 09:57:01.185086 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 09:57:01.185663 master-0 kubenswrapper[8244]: I0318 09:57:01.185226 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" Mar 18 09:57:01.196092 master-0 kubenswrapper[8244]: I0318 09:57:01.195573 8244 scope.go:117] "RemoveContainer" containerID="c4f75bf7ef5ce11917c61ff84c2bd3e0d0e1e1136bc2eabc30374d4d7889b937" Mar 18 09:57:01.219984 master-0 kubenswrapper[8244]: I0318 09:57:01.219942 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hn6md" Mar 18 09:57:01.263645 master-0 kubenswrapper[8244]: I0318 09:57:01.263583 8244 scope.go:117] "RemoveContainer" containerID="d6d7d5014b05d887b72ceaed1997514c215d3bab28a9845f661c8f8e5a37c61b" Mar 18 09:57:01.273926 master-0 kubenswrapper[8244]: I0318 09:57:01.273896 8244 scope.go:117] "RemoveContainer" containerID="5e3f39d6c47db56b54c454760e9bcbc843db42e436708139221b97d0f4eca4c1" Mar 18 09:57:01.274211 master-0 kubenswrapper[8244]: E0318 09:57:01.274171 8244 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e3f39d6c47db56b54c454760e9bcbc843db42e436708139221b97d0f4eca4c1\": container with ID starting with 5e3f39d6c47db56b54c454760e9bcbc843db42e436708139221b97d0f4eca4c1 not found: ID does not exist" containerID="5e3f39d6c47db56b54c454760e9bcbc843db42e436708139221b97d0f4eca4c1" Mar 18 09:57:01.274263 master-0 kubenswrapper[8244]: I0318 09:57:01.274211 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e3f39d6c47db56b54c454760e9bcbc843db42e436708139221b97d0f4eca4c1"} err="failed to get container status \"5e3f39d6c47db56b54c454760e9bcbc843db42e436708139221b97d0f4eca4c1\": rpc error: code = NotFound desc = could not find container \"5e3f39d6c47db56b54c454760e9bcbc843db42e436708139221b97d0f4eca4c1\": container with ID starting with 5e3f39d6c47db56b54c454760e9bcbc843db42e436708139221b97d0f4eca4c1 not found: ID does not exist" Mar 18 09:57:01.274263 master-0 kubenswrapper[8244]: I0318 09:57:01.274238 8244 scope.go:117] "RemoveContainer" 
containerID="c4f75bf7ef5ce11917c61ff84c2bd3e0d0e1e1136bc2eabc30374d4d7889b937" Mar 18 09:57:01.274617 master-0 kubenswrapper[8244]: E0318 09:57:01.274576 8244 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4f75bf7ef5ce11917c61ff84c2bd3e0d0e1e1136bc2eabc30374d4d7889b937\": container with ID starting with c4f75bf7ef5ce11917c61ff84c2bd3e0d0e1e1136bc2eabc30374d4d7889b937 not found: ID does not exist" containerID="c4f75bf7ef5ce11917c61ff84c2bd3e0d0e1e1136bc2eabc30374d4d7889b937" Mar 18 09:57:01.274666 master-0 kubenswrapper[8244]: I0318 09:57:01.274624 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4f75bf7ef5ce11917c61ff84c2bd3e0d0e1e1136bc2eabc30374d4d7889b937"} err="failed to get container status \"c4f75bf7ef5ce11917c61ff84c2bd3e0d0e1e1136bc2eabc30374d4d7889b937\": rpc error: code = NotFound desc = could not find container \"c4f75bf7ef5ce11917c61ff84c2bd3e0d0e1e1136bc2eabc30374d4d7889b937\": container with ID starting with c4f75bf7ef5ce11917c61ff84c2bd3e0d0e1e1136bc2eabc30374d4d7889b937 not found: ID does not exist" Mar 18 09:57:01.274666 master-0 kubenswrapper[8244]: I0318 09:57:01.274656 8244 scope.go:117] "RemoveContainer" containerID="d6d7d5014b05d887b72ceaed1997514c215d3bab28a9845f661c8f8e5a37c61b" Mar 18 09:57:01.275326 master-0 kubenswrapper[8244]: E0318 09:57:01.275290 8244 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6d7d5014b05d887b72ceaed1997514c215d3bab28a9845f661c8f8e5a37c61b\": container with ID starting with d6d7d5014b05d887b72ceaed1997514c215d3bab28a9845f661c8f8e5a37c61b not found: ID does not exist" containerID="d6d7d5014b05d887b72ceaed1997514c215d3bab28a9845f661c8f8e5a37c61b" Mar 18 09:57:01.275387 master-0 kubenswrapper[8244]: I0318 09:57:01.275321 8244 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d6d7d5014b05d887b72ceaed1997514c215d3bab28a9845f661c8f8e5a37c61b"} err="failed to get container status \"d6d7d5014b05d887b72ceaed1997514c215d3bab28a9845f661c8f8e5a37c61b\": rpc error: code = NotFound desc = could not find container \"d6d7d5014b05d887b72ceaed1997514c215d3bab28a9845f661c8f8e5a37c61b\": container with ID starting with d6d7d5014b05d887b72ceaed1997514c215d3bab28a9845f661c8f8e5a37c61b not found: ID does not exist" Mar 18 09:57:01.342406 master-0 kubenswrapper[8244]: I0318 09:57:01.342349 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af588cc6-5c57-4fea-a8db-84bf34b647a3-utilities\") pod \"af588cc6-5c57-4fea-a8db-84bf34b647a3\" (UID: \"af588cc6-5c57-4fea-a8db-84bf34b647a3\") " Mar 18 09:57:01.342583 master-0 kubenswrapper[8244]: I0318 09:57:01.342446 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pws8f\" (UniqueName: \"kubernetes.io/projected/af588cc6-5c57-4fea-a8db-84bf34b647a3-kube-api-access-pws8f\") pod \"af588cc6-5c57-4fea-a8db-84bf34b647a3\" (UID: \"af588cc6-5c57-4fea-a8db-84bf34b647a3\") " Mar 18 09:57:01.342583 master-0 kubenswrapper[8244]: I0318 09:57:01.342481 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af588cc6-5c57-4fea-a8db-84bf34b647a3-catalog-content\") pod \"af588cc6-5c57-4fea-a8db-84bf34b647a3\" (UID: \"af588cc6-5c57-4fea-a8db-84bf34b647a3\") " Mar 18 09:57:01.344143 master-0 kubenswrapper[8244]: I0318 09:57:01.343527 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af588cc6-5c57-4fea-a8db-84bf34b647a3-utilities" (OuterVolumeSpecName: "utilities") pod "af588cc6-5c57-4fea-a8db-84bf34b647a3" (UID: "af588cc6-5c57-4fea-a8db-84bf34b647a3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 09:57:01.346648 master-0 kubenswrapper[8244]: I0318 09:57:01.346598 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af588cc6-5c57-4fea-a8db-84bf34b647a3-kube-api-access-pws8f" (OuterVolumeSpecName: "kube-api-access-pws8f") pod "af588cc6-5c57-4fea-a8db-84bf34b647a3" (UID: "af588cc6-5c57-4fea-a8db-84bf34b647a3"). InnerVolumeSpecName "kube-api-access-pws8f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 09:57:01.448637 master-0 kubenswrapper[8244]: I0318 09:57:01.448557 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pws8f\" (UniqueName: \"kubernetes.io/projected/af588cc6-5c57-4fea-a8db-84bf34b647a3-kube-api-access-pws8f\") on node \"master-0\" DevicePath \"\""
Mar 18 09:57:01.448637 master-0 kubenswrapper[8244]: I0318 09:57:01.448625 8244 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af588cc6-5c57-4fea-a8db-84bf34b647a3-utilities\") on node \"master-0\" DevicePath \"\""
Mar 18 09:57:01.462527 master-0 kubenswrapper[8244]: I0318 09:57:01.462020 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:57:01.488117 master-0 kubenswrapper[8244]: I0318 09:57:01.488058 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af588cc6-5c57-4fea-a8db-84bf34b647a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af588cc6-5c57-4fea-a8db-84bf34b647a3" (UID: "af588cc6-5c57-4fea-a8db-84bf34b647a3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 09:57:01.550208 master-0 kubenswrapper[8244]: I0318 09:57:01.550122 8244 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af588cc6-5c57-4fea-a8db-84bf34b647a3-catalog-content\") on node \"master-0\" DevicePath \"\""
Mar 18 09:57:01.741500 master-0 kubenswrapper[8244]: I0318 09:57:01.741416 8244 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a4c7d0e-10a1-44d1-8874-8e2a76753106" path="/var/lib/kubelet/pods/2a4c7d0e-10a1-44d1-8874-8e2a76753106/volumes"
Mar 18 09:57:02.054743 master-0 kubenswrapper[8244]: I0318 09:57:02.054603 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hn6md"
Mar 18 09:57:02.055080 master-0 kubenswrapper[8244]: I0318 09:57:02.054789 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hn6md" event={"ID":"af588cc6-5c57-4fea-a8db-84bf34b647a3","Type":"ContainerDied","Data":"bcaf8f561f370518d63f5758dd9df59a375ae07c11f13b0cd1da423c7b17de37"}
Mar 18 09:57:02.055080 master-0 kubenswrapper[8244]: I0318 09:57:02.054961 8244 scope.go:117] "RemoveContainer" containerID="91ebdefaf6e1db7f6ba006a75e8fa665d272029e470b99c96f6f3bc993072519"
Mar 18 09:57:02.064302 master-0 kubenswrapper[8244]: I0318 09:57:02.064204 8244 generic.go:334] "Generic (PLEG): container finished" podID="3e884f11-9ace-4ef9-930a-05e170d1bfab" containerID="a3003c286f247e40ae0a98b2ed04ead75ba0e59f5cc03430ca0f6f1043f83c66" exitCode=0
Mar 18 09:57:02.064302 master-0 kubenswrapper[8244]: I0318 09:57:02.064278 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lx2pt" event={"ID":"3e884f11-9ace-4ef9-930a-05e170d1bfab","Type":"ContainerDied","Data":"a3003c286f247e40ae0a98b2ed04ead75ba0e59f5cc03430ca0f6f1043f83c66"}
Mar 18 09:57:02.081285 master-0 kubenswrapper[8244]: I0318 09:57:02.081223 8244 scope.go:117] "RemoveContainer" containerID="0338cdb13e96b331f60752e9956b2a4b591e432d10014af91991b3918b5996f0"
Mar 18 09:57:02.083469 master-0 kubenswrapper[8244]: I0318 09:57:02.083397 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hn6md"]
Mar 18 09:57:02.107500 master-0 kubenswrapper[8244]: I0318 09:57:02.107390 8244 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hn6md"]
Mar 18 09:57:02.115120 master-0 kubenswrapper[8244]: I0318 09:57:02.115060 8244 scope.go:117] "RemoveContainer" containerID="8718f426ea7c61f316713bf92f0fe2e4fac0475e6be4073f7d39f66ad5db68f7"
Mar 18 09:57:02.688921 master-0 kubenswrapper[8244]: I0318 09:57:02.688818 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 09:57:02.689914 master-0 kubenswrapper[8244]: I0318 09:57:02.689862 8244 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 09:57:02.690185 master-0 kubenswrapper[8244]: I0318 09:57:02.690158 8244 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg"
Mar 18 09:57:02.691476 master-0 kubenswrapper[8244]: I0318 09:57:02.691437 8244 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"7d07e8c06ddf9d3c29ebaf294b7a205901752e302793187eb4f8dcbb44b41fc0"} pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted"
Mar 18 09:57:02.691665 master-0 kubenswrapper[8244]: I0318 09:57:02.691634 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" containerID="cri-o://7d07e8c06ddf9d3c29ebaf294b7a205901752e302793187eb4f8dcbb44b41fc0" gracePeriod=30
Mar 18 09:57:03.072567 master-0 kubenswrapper[8244]: I0318 09:57:03.072445 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lx2pt" event={"ID":"3e884f11-9ace-4ef9-930a-05e170d1bfab","Type":"ContainerStarted","Data":"80df7e5c62a96d59df936be288c79791eaf170825ef363fceff2bf6b9f286dcd"}
Mar 18 09:57:03.105092 master-0 kubenswrapper[8244]: I0318 09:57:03.104927 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lx2pt" podStartSLOduration=77.33782647 podStartE2EDuration="1m23.10483628s" podCreationTimestamp="2026-03-18 09:55:40 +0000 UTC" firstStartedPulling="2026-03-18 09:56:56.984082769 +0000 UTC m=+133.463818897" lastFinishedPulling="2026-03-18 09:57:02.751092549 +0000 UTC m=+139.230828707" observedRunningTime="2026-03-18 09:57:03.097757754 +0000 UTC m=+139.577493882" watchObservedRunningTime="2026-03-18 09:57:03.10483628 +0000 UTC m=+139.584572448"
Mar 18 09:57:03.692000 master-0 kubenswrapper[8244]: I0318 09:57:03.691915 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 09:57:03.692000 master-0 kubenswrapper[8244]: I0318 09:57:03.691997 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 09:57:03.746520 master-0 kubenswrapper[8244]: I0318 09:57:03.746449 8244 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af588cc6-5c57-4fea-a8db-84bf34b647a3" path="/var/lib/kubelet/pods/af588cc6-5c57-4fea-a8db-84bf34b647a3/volumes"
Mar 18 09:57:03.923273 master-0 kubenswrapper[8244]: I0318 09:57:03.923191 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Mar 18 09:57:03.949700 master-0 kubenswrapper[8244]: I0318 09:57:03.949585 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Mar 18 09:57:04.462816 master-0 kubenswrapper[8244]: I0318 09:57:04.462740 8244 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 09:57:04.693098 master-0 kubenswrapper[8244]: I0318 09:57:04.693019 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 09:57:04.693098 master-0 kubenswrapper[8244]: I0318 09:57:04.693092 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 09:57:05.518045 master-0 kubenswrapper[8244]: I0318 09:57:05.517905 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lx2pt"
Mar 18 09:57:05.518045 master-0 kubenswrapper[8244]: I0318 09:57:05.517978 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lx2pt"
Mar 18 09:57:05.558657 master-0 kubenswrapper[8244]: I0318 09:57:05.558612 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lx2pt"
Mar 18 09:57:07.185024 master-0 kubenswrapper[8244]: I0318 09:57:07.184952 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 09:57:07.185881 master-0 kubenswrapper[8244]: I0318 09:57:07.185058 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 09:57:08.467464 master-0 kubenswrapper[8244]: E0318 09:57:08.467385 8244 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Mar 18 09:57:08.956924 master-0 kubenswrapper[8244]: I0318 09:57:08.956879 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Mar 18 09:57:10.190879 master-0 kubenswrapper[8244]: I0318 09:57:10.186102 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 09:57:10.191866 master-0 kubenswrapper[8244]: I0318 09:57:10.190926 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 09:57:13.185013 master-0 kubenswrapper[8244]: I0318 09:57:13.184920 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 09:57:13.185621 master-0 kubenswrapper[8244]: I0318 09:57:13.185019 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 09:57:13.628521 master-0 kubenswrapper[8244]: I0318 09:57:13.628444 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2n6d2"]
Mar 18 09:57:13.631214 master-0 kubenswrapper[8244]: I0318 09:57:13.631087 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2n6d2" podUID="305c97a4-eb1b-4104-b9ba-2603229899b0" containerName="registry-server" containerID="cri-o://49a23c8f4def9e21a7f49e230fc81a54bd2391353d84a5994b1e32887aa942a1" gracePeriod=2
Mar 18 09:57:13.631714 master-0 kubenswrapper[8244]: I0318 09:57:13.631649 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lx2pt"]
Mar 18 09:57:13.632220 master-0 kubenswrapper[8244]: I0318 09:57:13.632053 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lx2pt" podUID="3e884f11-9ace-4ef9-930a-05e170d1bfab" containerName="registry-server" containerID="cri-o://80df7e5c62a96d59df936be288c79791eaf170825ef363fceff2bf6b9f286dcd" gracePeriod=2
Mar 18 09:57:13.636908 master-0 kubenswrapper[8244]: E0318 09:57:13.635875 8244 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="80df7e5c62a96d59df936be288c79791eaf170825ef363fceff2bf6b9f286dcd" cmd=["grpc_health_probe","-addr=:50051"]
Mar 18 09:57:13.638984 master-0 kubenswrapper[8244]: E0318 09:57:13.638890 8244 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="80df7e5c62a96d59df936be288c79791eaf170825ef363fceff2bf6b9f286dcd" cmd=["grpc_health_probe","-addr=:50051"]
Mar 18 09:57:13.640813 master-0 kubenswrapper[8244]: E0318 09:57:13.640680 8244 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="80df7e5c62a96d59df936be288c79791eaf170825ef363fceff2bf6b9f286dcd" cmd=["grpc_health_probe","-addr=:50051"]
Mar 18 09:57:13.640813 master-0 kubenswrapper[8244]: E0318 09:57:13.640809 8244 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/community-operators-lx2pt" podUID="3e884f11-9ace-4ef9-930a-05e170d1bfab" containerName="registry-server"
Mar 18 09:57:14.145810 master-0 kubenswrapper[8244]: I0318 09:57:14.145756 8244 generic.go:334] "Generic (PLEG): container finished" podID="3e884f11-9ace-4ef9-930a-05e170d1bfab" containerID="80df7e5c62a96d59df936be288c79791eaf170825ef363fceff2bf6b9f286dcd" exitCode=0
Mar 18 09:57:14.145810 master-0 kubenswrapper[8244]: I0318 09:57:14.145844 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lx2pt" event={"ID":"3e884f11-9ace-4ef9-930a-05e170d1bfab","Type":"ContainerDied","Data":"80df7e5c62a96d59df936be288c79791eaf170825ef363fceff2bf6b9f286dcd"}
Mar 18 09:57:14.148945 master-0 kubenswrapper[8244]: I0318 09:57:14.148908 8244 generic.go:334] "Generic (PLEG): container finished" podID="305c97a4-eb1b-4104-b9ba-2603229899b0" containerID="49a23c8f4def9e21a7f49e230fc81a54bd2391353d84a5994b1e32887aa942a1" exitCode=0
Mar 18 09:57:14.148945 master-0 kubenswrapper[8244]: I0318 09:57:14.148941 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2n6d2" event={"ID":"305c97a4-eb1b-4104-b9ba-2603229899b0","Type":"ContainerDied","Data":"49a23c8f4def9e21a7f49e230fc81a54bd2391353d84a5994b1e32887aa942a1"}
Mar 18 09:57:14.219381 master-0 kubenswrapper[8244]: I0318 09:57:14.219260 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2n6d2"
Mar 18 09:57:14.227742 master-0 kubenswrapper[8244]: I0318 09:57:14.227686 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lx2pt"
Mar 18 09:57:14.243922 master-0 kubenswrapper[8244]: I0318 09:57:14.243790 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6fq6\" (UniqueName: \"kubernetes.io/projected/305c97a4-eb1b-4104-b9ba-2603229899b0-kube-api-access-c6fq6\") pod \"305c97a4-eb1b-4104-b9ba-2603229899b0\" (UID: \"305c97a4-eb1b-4104-b9ba-2603229899b0\") "
Mar 18 09:57:14.243922 master-0 kubenswrapper[8244]: I0318 09:57:14.243896 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/305c97a4-eb1b-4104-b9ba-2603229899b0-catalog-content\") pod \"305c97a4-eb1b-4104-b9ba-2603229899b0\" (UID: \"305c97a4-eb1b-4104-b9ba-2603229899b0\") "
Mar 18 09:57:14.243922 master-0 kubenswrapper[8244]: I0318 09:57:14.243922 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/305c97a4-eb1b-4104-b9ba-2603229899b0-utilities\") pod \"305c97a4-eb1b-4104-b9ba-2603229899b0\" (UID: \"305c97a4-eb1b-4104-b9ba-2603229899b0\") "
Mar 18 09:57:14.245014 master-0 kubenswrapper[8244]: I0318 09:57:14.244970 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/305c97a4-eb1b-4104-b9ba-2603229899b0-utilities" (OuterVolumeSpecName: "utilities") pod "305c97a4-eb1b-4104-b9ba-2603229899b0" (UID: "305c97a4-eb1b-4104-b9ba-2603229899b0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 09:57:14.249348 master-0 kubenswrapper[8244]: I0318 09:57:14.249281 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/305c97a4-eb1b-4104-b9ba-2603229899b0-kube-api-access-c6fq6" (OuterVolumeSpecName: "kube-api-access-c6fq6") pod "305c97a4-eb1b-4104-b9ba-2603229899b0" (UID: "305c97a4-eb1b-4104-b9ba-2603229899b0"). InnerVolumeSpecName "kube-api-access-c6fq6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 09:57:14.332999 master-0 kubenswrapper[8244]: I0318 09:57:14.331732 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/305c97a4-eb1b-4104-b9ba-2603229899b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "305c97a4-eb1b-4104-b9ba-2603229899b0" (UID: "305c97a4-eb1b-4104-b9ba-2603229899b0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 09:57:14.345201 master-0 kubenswrapper[8244]: I0318 09:57:14.345083 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e884f11-9ace-4ef9-930a-05e170d1bfab-utilities\") pod \"3e884f11-9ace-4ef9-930a-05e170d1bfab\" (UID: \"3e884f11-9ace-4ef9-930a-05e170d1bfab\") "
Mar 18 09:57:14.345398 master-0 kubenswrapper[8244]: I0318 09:57:14.345369 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e884f11-9ace-4ef9-930a-05e170d1bfab-catalog-content\") pod \"3e884f11-9ace-4ef9-930a-05e170d1bfab\" (UID: \"3e884f11-9ace-4ef9-930a-05e170d1bfab\") "
Mar 18 09:57:14.345481 master-0 kubenswrapper[8244]: I0318 09:57:14.345412 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-958k6\" (UniqueName: \"kubernetes.io/projected/3e884f11-9ace-4ef9-930a-05e170d1bfab-kube-api-access-958k6\") pod \"3e884f11-9ace-4ef9-930a-05e170d1bfab\" (UID: \"3e884f11-9ace-4ef9-930a-05e170d1bfab\") "
Mar 18 09:57:14.345670 master-0 kubenswrapper[8244]: I0318 09:57:14.345630 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6fq6\" (UniqueName: \"kubernetes.io/projected/305c97a4-eb1b-4104-b9ba-2603229899b0-kube-api-access-c6fq6\") on node \"master-0\" DevicePath \"\""
Mar 18 09:57:14.345670 master-0 kubenswrapper[8244]: I0318 09:57:14.345652 8244 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/305c97a4-eb1b-4104-b9ba-2603229899b0-catalog-content\") on node \"master-0\" DevicePath \"\""
Mar 18 09:57:14.345670 master-0 kubenswrapper[8244]: I0318 09:57:14.345666 8244 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/305c97a4-eb1b-4104-b9ba-2603229899b0-utilities\") on node \"master-0\" DevicePath \"\""
Mar 18 09:57:14.346671 master-0 kubenswrapper[8244]: I0318 09:57:14.346600 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e884f11-9ace-4ef9-930a-05e170d1bfab-utilities" (OuterVolumeSpecName: "utilities") pod "3e884f11-9ace-4ef9-930a-05e170d1bfab" (UID: "3e884f11-9ace-4ef9-930a-05e170d1bfab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 09:57:14.348729 master-0 kubenswrapper[8244]: I0318 09:57:14.348683 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e884f11-9ace-4ef9-930a-05e170d1bfab-kube-api-access-958k6" (OuterVolumeSpecName: "kube-api-access-958k6") pod "3e884f11-9ace-4ef9-930a-05e170d1bfab" (UID: "3e884f11-9ace-4ef9-930a-05e170d1bfab"). InnerVolumeSpecName "kube-api-access-958k6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 09:57:14.413576 master-0 kubenswrapper[8244]: I0318 09:57:14.413466 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e884f11-9ace-4ef9-930a-05e170d1bfab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e884f11-9ace-4ef9-930a-05e170d1bfab" (UID: "3e884f11-9ace-4ef9-930a-05e170d1bfab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 09:57:14.446790 master-0 kubenswrapper[8244]: I0318 09:57:14.446721 8244 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e884f11-9ace-4ef9-930a-05e170d1bfab-catalog-content\") on node \"master-0\" DevicePath \"\""
Mar 18 09:57:14.446790 master-0 kubenswrapper[8244]: I0318 09:57:14.446781 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-958k6\" (UniqueName: \"kubernetes.io/projected/3e884f11-9ace-4ef9-930a-05e170d1bfab-kube-api-access-958k6\") on node \"master-0\" DevicePath \"\""
Mar 18 09:57:14.447047 master-0 kubenswrapper[8244]: I0318 09:57:14.446812 8244 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e884f11-9ace-4ef9-930a-05e170d1bfab-utilities\") on node \"master-0\" DevicePath \"\""
Mar 18 09:57:14.461650 master-0 kubenswrapper[8244]: I0318 09:57:14.461571 8244 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 09:57:15.160855 master-0 kubenswrapper[8244]: I0318 09:57:15.157289 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lx2pt" event={"ID":"3e884f11-9ace-4ef9-930a-05e170d1bfab","Type":"ContainerDied","Data":"92ea30e6b1acf0370980e9217d92b6832726a8cf9403f31798128c84642185d7"}
Mar 18 09:57:15.160855 master-0 kubenswrapper[8244]: I0318 09:57:15.157340 8244 scope.go:117] "RemoveContainer" containerID="80df7e5c62a96d59df936be288c79791eaf170825ef363fceff2bf6b9f286dcd"
Mar 18 09:57:15.160855 master-0 kubenswrapper[8244]: I0318 09:57:15.157445 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lx2pt"
Mar 18 09:57:15.183661 master-0 kubenswrapper[8244]: I0318 09:57:15.183567 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2n6d2" event={"ID":"305c97a4-eb1b-4104-b9ba-2603229899b0","Type":"ContainerDied","Data":"e851101b44a79cab31320a525983c7e460dfb515d195e81afefdaabb52603f4f"}
Mar 18 09:57:15.183856 master-0 kubenswrapper[8244]: I0318 09:57:15.183688 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2n6d2"
Mar 18 09:57:15.201965 master-0 kubenswrapper[8244]: I0318 09:57:15.200912 8244 scope.go:117] "RemoveContainer" containerID="a3003c286f247e40ae0a98b2ed04ead75ba0e59f5cc03430ca0f6f1043f83c66"
Mar 18 09:57:15.202578 master-0 kubenswrapper[8244]: I0318 09:57:15.202560 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lx2pt"]
Mar 18 09:57:15.212380 master-0 kubenswrapper[8244]: I0318 09:57:15.212314 8244 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lx2pt"]
Mar 18 09:57:15.217253 master-0 kubenswrapper[8244]: I0318 09:57:15.217225 8244 scope.go:117] "RemoveContainer" containerID="1cbdc4e76d1d07790af02f84bca996c202797edf9bebfc3cedebf4576f85e31c"
Mar 18 09:57:15.221101 master-0 kubenswrapper[8244]: I0318 09:57:15.221054 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2n6d2"]
Mar 18 09:57:15.229741 master-0 kubenswrapper[8244]: I0318 09:57:15.229687 8244 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2n6d2"]
Mar 18 09:57:15.231117 master-0 kubenswrapper[8244]: I0318 09:57:15.231085 8244 scope.go:117] "RemoveContainer" containerID="49a23c8f4def9e21a7f49e230fc81a54bd2391353d84a5994b1e32887aa942a1"
Mar 18 09:57:15.251695 master-0 kubenswrapper[8244]: I0318 09:57:15.251654 8244 scope.go:117] "RemoveContainer" containerID="a97c2824af4a8942386c440e962d66b8577475834e78172714b5d24decf0108e"
Mar 18 09:57:15.263938 master-0 kubenswrapper[8244]: I0318 09:57:15.263906 8244 scope.go:117] "RemoveContainer" containerID="03e6fba4231fcdda92e3fad96e79a4e5f2aa602c65c22bc627f57140c57092f0"
Mar 18 09:57:15.739114 master-0 kubenswrapper[8244]: I0318 09:57:15.739046 8244 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="305c97a4-eb1b-4104-b9ba-2603229899b0" path="/var/lib/kubelet/pods/305c97a4-eb1b-4104-b9ba-2603229899b0/volumes"
Mar 18 09:57:15.739645 master-0 kubenswrapper[8244]: I0318 09:57:15.739603 8244 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e884f11-9ace-4ef9-930a-05e170d1bfab" path="/var/lib/kubelet/pods/3e884f11-9ace-4ef9-930a-05e170d1bfab/volumes"
Mar 18 09:57:16.185850 master-0 kubenswrapper[8244]: I0318 09:57:16.185742 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 09:57:16.186158 master-0 kubenswrapper[8244]: I0318 09:57:16.185854 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 09:57:16.523016 master-0 kubenswrapper[8244]: E0318 09:57:16.522963 8244 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaccc57fb_75f5_4f89_9804_6ede7f77e27c.slice/crio-206825c3b2d516109311b9ec6547c75a5e9979c7b55c567cf556284de0799148.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaccc57fb_75f5_4f89_9804_6ede7f77e27c.slice/crio-conmon-206825c3b2d516109311b9ec6547c75a5e9979c7b55c567cf556284de0799148.scope\": RecentStats: unable to find data in memory cache]"
Mar 18 09:57:17.207859 master-0 kubenswrapper[8244]: I0318 09:57:17.207744 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-kr5kz_accc57fb-75f5-4f89-9804-6ede7f77e27c/ingress-operator/0.log"
Mar 18 09:57:17.207859 master-0 kubenswrapper[8244]: I0318 09:57:17.207804 8244 generic.go:334] "Generic (PLEG): container finished" podID="accc57fb-75f5-4f89-9804-6ede7f77e27c" containerID="206825c3b2d516109311b9ec6547c75a5e9979c7b55c567cf556284de0799148" exitCode=1
Mar 18 09:57:17.208078 master-0 kubenswrapper[8244]: I0318 09:57:17.207875 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" event={"ID":"accc57fb-75f5-4f89-9804-6ede7f77e27c","Type":"ContainerDied","Data":"206825c3b2d516109311b9ec6547c75a5e9979c7b55c567cf556284de0799148"}
Mar 18 09:57:17.210347 master-0 kubenswrapper[8244]: I0318 09:57:17.209965 8244 scope.go:117] "RemoveContainer" containerID="206825c3b2d516109311b9ec6547c75a5e9979c7b55c567cf556284de0799148"
Mar 18 09:57:18.215491 master-0 kubenswrapper[8244]: I0318 09:57:18.215384 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-kr5kz_accc57fb-75f5-4f89-9804-6ede7f77e27c/ingress-operator/0.log"
Mar 18 09:57:18.215491 master-0 kubenswrapper[8244]: I0318 09:57:18.215472 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" event={"ID":"accc57fb-75f5-4f89-9804-6ede7f77e27c","Type":"ContainerStarted","Data":"8be1e41fb91899198366216500a2564664d7ef8ef90cbe9f4c1e19358a42df09"}
Mar 18 09:57:19.186337 master-0 kubenswrapper[8244]: I0318 09:57:19.186241 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 09:57:19.186601 master-0 kubenswrapper[8244]: I0318 09:57:19.186340 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 09:57:22.120300 master-0 kubenswrapper[8244]: E0318 09:57:22.120238 8244 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Mar 18 09:57:22.185654 master-0 kubenswrapper[8244]: I0318 09:57:22.185582 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 09:57:22.186115 master-0 kubenswrapper[8244]: I0318 09:57:22.186082 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 09:57:24.462343 master-0 kubenswrapper[8244]: I0318 09:57:24.462147 8244 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 09:57:24.462343 master-0 kubenswrapper[8244]: I0318 09:57:24.462284 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:57:24.463102 master-0 kubenswrapper[8244]: I0318 09:57:24.463063 8244 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"591676dbc66d002b41b1524c76dbc54235b7dd32e488240aec01e853c0930dc0"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Mar 18 09:57:24.463205 master-0 kubenswrapper[8244]: I0318 09:57:24.463165 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" containerID="cri-o://591676dbc66d002b41b1524c76dbc54235b7dd32e488240aec01e853c0930dc0" gracePeriod=30
Mar 18 09:57:25.185778 master-0 kubenswrapper[8244]: I0318 09:57:25.185689 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 09:57:25.185778 master-0 kubenswrapper[8244]: I0318 09:57:25.185775 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 09:57:25.262578 master-0 kubenswrapper[8244]: I0318 09:57:25.262540 8244 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="591676dbc66d002b41b1524c76dbc54235b7dd32e488240aec01e853c0930dc0" exitCode=2
Mar 18 09:57:25.262708 master-0 kubenswrapper[8244]: I0318 09:57:25.262590 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"591676dbc66d002b41b1524c76dbc54235b7dd32e488240aec01e853c0930dc0"}
Mar 18 09:57:25.262708 master-0 kubenswrapper[8244]: I0318 09:57:25.262629 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"459fcfb70fb899949af51fd621c6c7e3b1b5510c468c992c115b7f0303ef5eb8"}
Mar 18 09:57:25.262708 master-0 kubenswrapper[8244]: I0318 09:57:25.262655 8244 scope.go:117] "RemoveContainer" containerID="25e8b4ad00ce2bdd7986e5a3dbebb908681f21787c999f9ac28c5b382c85fc69"
Mar 18 09:57:27.332621 master-0 kubenswrapper[8244]: I0318 09:57:27.332529 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:57:28.186040 master-0 kubenswrapper[8244]: I0318 09:57:28.185564 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 09:57:28.186040 master-0 kubenswrapper[8244]: I0318 09:57:28.185673 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 09:57:29.598881 master-0 kubenswrapper[8244]: I0318 09:57:29.597500 8244 patch_prober.go:28] interesting pod/etcd-operator-8544cbcf9c-4tlnm container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": read tcp 10.128.0.2:55444->10.128.0.7:8443: read: connection reset by peer" start-of-body=
Mar 18 09:57:29.598881 master-0 kubenswrapper[8244]: I0318 09:57:29.597577 8244 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" podUID="a078565a-6970-4f42-84f4-938f1d637245" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": read tcp 10.128.0.2:55444->10.128.0.7:8443: read: connection reset by peer"
Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: I0318 09:57:29.673733 8244 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Mar 18 09:57:29.676307 master-0
kubenswrapper[8244]: I0318 09:57:29.673801 8244 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: E0318 09:57:29.673977 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: I0318 09:57:29.673988 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: E0318 09:57:29.673995 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be8bd84c-8035-4bec-a725-b0ae89382c0f" containerName="installer" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: I0318 09:57:29.674001 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="be8bd84c-8035-4bec-a725-b0ae89382c0f" containerName="installer" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: E0318 09:57:29.674009 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e884f11-9ace-4ef9-930a-05e170d1bfab" containerName="registry-server" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: I0318 09:57:29.674016 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e884f11-9ace-4ef9-930a-05e170d1bfab" containerName="registry-server" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: E0318 09:57:29.674027 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305c97a4-eb1b-4104-b9ba-2603229899b0" containerName="extract-utilities" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: I0318 09:57:29.674033 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="305c97a4-eb1b-4104-b9ba-2603229899b0" containerName="extract-utilities" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: E0318 09:57:29.674040 8244 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5fb70bf3-93cd-4000-be1a-8e21846d5709" containerName="installer" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: I0318 09:57:29.674046 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fb70bf3-93cd-4000-be1a-8e21846d5709" containerName="installer" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: E0318 09:57:29.674054 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00" containerName="installer" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: I0318 09:57:29.674060 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00" containerName="installer" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: E0318 09:57:29.674069 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305c97a4-eb1b-4104-b9ba-2603229899b0" containerName="extract-content" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: I0318 09:57:29.674075 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="305c97a4-eb1b-4104-b9ba-2603229899b0" containerName="extract-content" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: E0318 09:57:29.674084 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af588cc6-5c57-4fea-a8db-84bf34b647a3" containerName="extract-content" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: I0318 09:57:29.674090 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="af588cc6-5c57-4fea-a8db-84bf34b647a3" containerName="extract-content" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: E0318 09:57:29.674099 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af588cc6-5c57-4fea-a8db-84bf34b647a3" containerName="registry-server" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: I0318 09:57:29.674104 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="af588cc6-5c57-4fea-a8db-84bf34b647a3" containerName="registry-server" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: 
E0318 09:57:29.674114 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a4c7d0e-10a1-44d1-8874-8e2a76753106" containerName="registry-server" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: I0318 09:57:29.674120 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a4c7d0e-10a1-44d1-8874-8e2a76753106" containerName="registry-server" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: E0318 09:57:29.674131 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a4c7d0e-10a1-44d1-8874-8e2a76753106" containerName="extract-utilities" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: I0318 09:57:29.674136 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a4c7d0e-10a1-44d1-8874-8e2a76753106" containerName="extract-utilities" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: E0318 09:57:29.674143 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305c97a4-eb1b-4104-b9ba-2603229899b0" containerName="registry-server" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: I0318 09:57:29.674149 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="305c97a4-eb1b-4104-b9ba-2603229899b0" containerName="registry-server" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: E0318 09:57:29.674157 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e884f11-9ace-4ef9-930a-05e170d1bfab" containerName="extract-content" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: I0318 09:57:29.674163 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e884f11-9ace-4ef9-930a-05e170d1bfab" containerName="extract-content" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: E0318 09:57:29.674171 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e884f11-9ace-4ef9-930a-05e170d1bfab" containerName="extract-utilities" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: I0318 09:57:29.674176 8244 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3e884f11-9ace-4ef9-930a-05e170d1bfab" containerName="extract-utilities" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: E0318 09:57:29.674185 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af588cc6-5c57-4fea-a8db-84bf34b647a3" containerName="extract-utilities" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: I0318 09:57:29.674191 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="af588cc6-5c57-4fea-a8db-84bf34b647a3" containerName="extract-utilities" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: E0318 09:57:29.674197 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: I0318 09:57:29.674203 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: E0318 09:57:29.674212 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a4c7d0e-10a1-44d1-8874-8e2a76753106" containerName="extract-content" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: I0318 09:57:29.674218 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a4c7d0e-10a1-44d1-8874-8e2a76753106" containerName="extract-content" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: I0318 09:57:29.674288 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="af588cc6-5c57-4fea-a8db-84bf34b647a3" containerName="registry-server" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: I0318 09:57:29.674301 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e884f11-9ace-4ef9-930a-05e170d1bfab" containerName="registry-server" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: I0318 09:57:29.674309 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 09:57:29.676307 
master-0 kubenswrapper[8244]: I0318 09:57:29.674321 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a4c7d0e-10a1-44d1-8874-8e2a76753106" containerName="registry-server" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: I0318 09:57:29.674330 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="305c97a4-eb1b-4104-b9ba-2603229899b0" containerName="registry-server" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: I0318 09:57:29.674337 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="be8bd84c-8035-4bec-a725-b0ae89382c0f" containerName="installer" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: I0318 09:57:29.674345 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f4cd1d6-3c2a-42bb-a469-5e7dc2d5ba00" containerName="installer" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: I0318 09:57:29.674354 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fb70bf3-93cd-4000-be1a-8e21846d5709" containerName="installer" Mar 18 09:57:29.676307 master-0 kubenswrapper[8244]: I0318 09:57:29.674509 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 09:57:29.689362 master-0 kubenswrapper[8244]: I0318 09:57:29.676478 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" containerID="cri-o://614bad60cc203e379c2219ece0e463fc923ffaef207f86d7d7dbe59e9131f846" gracePeriod=30 Mar 18 09:57:29.689362 master-0 kubenswrapper[8244]: I0318 09:57:29.677358 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:57:29.744995 master-0 kubenswrapper[8244]: I0318 09:57:29.744942 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:57:29.744995 master-0 kubenswrapper[8244]: I0318 09:57:29.744989 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:57:29.846888 master-0 kubenswrapper[8244]: I0318 09:57:29.846749 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:57:29.847041 master-0 kubenswrapper[8244]: I0318 09:57:29.846953 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:57:29.847110 master-0 kubenswrapper[8244]: I0318 09:57:29.847063 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:57:29.847165 master-0 kubenswrapper[8244]: I0318 09:57:29.847143 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:57:30.185371 master-0 kubenswrapper[8244]: I0318 09:57:30.185200 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body= Mar 18 09:57:30.185371 master-0 kubenswrapper[8244]: I0318 09:57:30.185254 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" Mar 18 09:57:30.297036 master-0 kubenswrapper[8244]: I0318 09:57:30.296980 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-g25jq_3646e0cd-49c9-4a98-a2e3-efe9359cc6c4/openshift-controller-manager-operator/1.log" Mar 18 09:57:30.298267 master-0 kubenswrapper[8244]: I0318 09:57:30.298216 8244 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-g25jq_3646e0cd-49c9-4a98-a2e3-efe9359cc6c4/openshift-controller-manager-operator/0.log" Mar 18 09:57:30.298357 master-0 kubenswrapper[8244]: I0318 09:57:30.298316 8244 generic.go:334] "Generic (PLEG): container finished" podID="3646e0cd-49c9-4a98-a2e3-efe9359cc6c4" containerID="2795ecc70fe66ee4a0f920912ba6641b4460a6d001aedb4e015ff801933a203d" exitCode=255 Mar 18 09:57:30.298443 master-0 kubenswrapper[8244]: I0318 09:57:30.298413 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq" event={"ID":"3646e0cd-49c9-4a98-a2e3-efe9359cc6c4","Type":"ContainerDied","Data":"2795ecc70fe66ee4a0f920912ba6641b4460a6d001aedb4e015ff801933a203d"} Mar 18 09:57:30.298512 master-0 kubenswrapper[8244]: I0318 09:57:30.298467 8244 scope.go:117] "RemoveContainer" containerID="c8f91dc57ea6bc611089a31345d27ad1b6b311c14621b5aebef7b7aac62f0940" Mar 18 09:57:30.299268 master-0 kubenswrapper[8244]: I0318 09:57:30.299127 8244 scope.go:117] "RemoveContainer" containerID="2795ecc70fe66ee4a0f920912ba6641b4460a6d001aedb4e015ff801933a203d" Mar 18 09:57:30.299490 master-0 kubenswrapper[8244]: E0318 09:57:30.299449 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-controller-manager-operator pod=openshift-controller-manager-operator-8c94f4649-g25jq_openshift-controller-manager-operator(3646e0cd-49c9-4a98-a2e3-efe9359cc6c4)\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq" podUID="3646e0cd-49c9-4a98-a2e3-efe9359cc6c4" Mar 18 09:57:30.303466 master-0 kubenswrapper[8244]: I0318 09:57:30.303379 8244 generic.go:334] "Generic (PLEG): container finished" podID="54a208d1-afe8-49b5-92e0-e27afb4abc80" 
containerID="d65f913e3d46ba5408795bb9c468d0294b6c4c00a07a18a41204ec7233a6d96b" exitCode=0 Mar 18 09:57:30.303556 master-0 kubenswrapper[8244]: I0318 09:57:30.303459 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"54a208d1-afe8-49b5-92e0-e27afb4abc80","Type":"ContainerDied","Data":"d65f913e3d46ba5408795bb9c468d0294b6c4c00a07a18a41204ec7233a6d96b"} Mar 18 09:57:30.308065 master-0 kubenswrapper[8244]: I0318 09:57:30.308003 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-495pg_0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480/openshift-config-operator/1.log" Mar 18 09:57:30.309629 master-0 kubenswrapper[8244]: I0318 09:57:30.309568 8244 generic.go:334] "Generic (PLEG): container finished" podID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerID="7d07e8c06ddf9d3c29ebaf294b7a205901752e302793187eb4f8dcbb44b41fc0" exitCode=255 Mar 18 09:57:30.309749 master-0 kubenswrapper[8244]: I0318 09:57:30.309686 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" event={"ID":"0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480","Type":"ContainerDied","Data":"7d07e8c06ddf9d3c29ebaf294b7a205901752e302793187eb4f8dcbb44b41fc0"} Mar 18 09:57:30.314218 master-0 kubenswrapper[8244]: I0318 09:57:30.314152 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-vj8tt_3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6/kube-scheduler-operator-container/1.log" Mar 18 09:57:30.314800 master-0 kubenswrapper[8244]: I0318 09:57:30.314747 8244 generic.go:334] "Generic (PLEG): container finished" podID="3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6" containerID="ece038fe79c27be1029079683dfa33a1fa90e9515d0fac47aae2ee51f3ffd2df" exitCode=255 Mar 18 09:57:30.314943 master-0 kubenswrapper[8244]: I0318 09:57:30.314879 8244 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt" event={"ID":"3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6","Type":"ContainerDied","Data":"ece038fe79c27be1029079683dfa33a1fa90e9515d0fac47aae2ee51f3ffd2df"} Mar 18 09:57:30.315704 master-0 kubenswrapper[8244]: I0318 09:57:30.315657 8244 scope.go:117] "RemoveContainer" containerID="ece038fe79c27be1029079683dfa33a1fa90e9515d0fac47aae2ee51f3ffd2df" Mar 18 09:57:30.317175 master-0 kubenswrapper[8244]: I0318 09:57:30.316886 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-6bb5bfb6fd-lk698_ec53d7fa-445b-4e1d-84ef-545f08e80ccc/kube-storage-version-migrator-operator/1.log" Mar 18 09:57:30.317175 master-0 kubenswrapper[8244]: E0318 09:57:30.317006 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler-operator-container\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler-operator-container pod=openshift-kube-scheduler-operator-dddff6458-vj8tt_openshift-kube-scheduler-operator(3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6)\"" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt" podUID="3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6" Mar 18 09:57:30.318175 master-0 kubenswrapper[8244]: I0318 09:57:30.318139 8244 generic.go:334] "Generic (PLEG): container finished" podID="ec53d7fa-445b-4e1d-84ef-545f08e80ccc" containerID="100b826fb47409f3adda82931968130591dc6b1e7420f5ccfd2ef57c6281504c" exitCode=255 Mar 18 09:57:30.318268 master-0 kubenswrapper[8244]: I0318 09:57:30.318181 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698" 
event={"ID":"ec53d7fa-445b-4e1d-84ef-545f08e80ccc","Type":"ContainerDied","Data":"100b826fb47409f3adda82931968130591dc6b1e7420f5ccfd2ef57c6281504c"} Mar 18 09:57:30.318803 master-0 kubenswrapper[8244]: I0318 09:57:30.318752 8244 scope.go:117] "RemoveContainer" containerID="100b826fb47409f3adda82931968130591dc6b1e7420f5ccfd2ef57c6281504c" Mar 18 09:57:30.319355 master-0 kubenswrapper[8244]: E0318 09:57:30.319294 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-storage-version-migrator-operator pod=kube-storage-version-migrator-operator-6bb5bfb6fd-lk698_openshift-kube-storage-version-migrator-operator(ec53d7fa-445b-4e1d-84ef-545f08e80ccc)\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698" podUID="ec53d7fa-445b-4e1d-84ef-545f08e80ccc" Mar 18 09:57:30.322545 master-0 kubenswrapper[8244]: I0318 09:57:30.322499 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-8srnz_9ccdc221-4ec5-487e-8ec4-85284ed628d8/network-operator/1.log" Mar 18 09:57:30.323337 master-0 kubenswrapper[8244]: I0318 09:57:30.323295 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-8srnz_9ccdc221-4ec5-487e-8ec4-85284ed628d8/network-operator/0.log" Mar 18 09:57:30.323420 master-0 kubenswrapper[8244]: I0318 09:57:30.323354 8244 generic.go:334] "Generic (PLEG): container finished" podID="9ccdc221-4ec5-487e-8ec4-85284ed628d8" containerID="b5bf205c4d2d39a65c5f434aca2db07e6f6c44b756c420c12726c015f7a4b2e6" exitCode=255 Mar 18 09:57:30.323462 master-0 kubenswrapper[8244]: I0318 09:57:30.323428 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-8srnz" 
event={"ID":"9ccdc221-4ec5-487e-8ec4-85284ed628d8","Type":"ContainerDied","Data":"b5bf205c4d2d39a65c5f434aca2db07e6f6c44b756c420c12726c015f7a4b2e6"} Mar 18 09:57:30.324397 master-0 kubenswrapper[8244]: I0318 09:57:30.324355 8244 scope.go:117] "RemoveContainer" containerID="b5bf205c4d2d39a65c5f434aca2db07e6f6c44b756c420c12726c015f7a4b2e6" Mar 18 09:57:30.324689 master-0 kubenswrapper[8244]: E0318 09:57:30.324628 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=network-operator pod=network-operator-7bd846bfc4-8srnz_openshift-network-operator(9ccdc221-4ec5-487e-8ec4-85284ed628d8)\"" pod="openshift-network-operator/network-operator-7bd846bfc4-8srnz" podUID="9ccdc221-4ec5-487e-8ec4-85284ed628d8" Mar 18 09:57:30.326211 master-0 kubenswrapper[8244]: I0318 09:57:30.326156 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-d65958b8-zz68c_0d72e695-0183-4ee8-8add-5425e67f7138/openshift-apiserver-operator/1.log" Mar 18 09:57:30.328838 master-0 kubenswrapper[8244]: I0318 09:57:30.327476 8244 generic.go:334] "Generic (PLEG): container finished" podID="0d72e695-0183-4ee8-8add-5425e67f7138" containerID="d7fed381f588321bf949c1ee4979e243946541c605dea6e2da6f26ae56dbca2b" exitCode=255 Mar 18 09:57:30.328838 master-0 kubenswrapper[8244]: I0318 09:57:30.327557 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c" event={"ID":"0d72e695-0183-4ee8-8add-5425e67f7138","Type":"ContainerDied","Data":"d7fed381f588321bf949c1ee4979e243946541c605dea6e2da6f26ae56dbca2b"} Mar 18 09:57:30.329795 master-0 kubenswrapper[8244]: I0318 09:57:30.329323 8244 scope.go:117] "RemoveContainer" containerID="d7fed381f588321bf949c1ee4979e243946541c605dea6e2da6f26ae56dbca2b" Mar 18 09:57:30.330439 master-0 kubenswrapper[8244]: E0318 
09:57:30.330378 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-apiserver-operator pod=openshift-apiserver-operator-d65958b8-zz68c_openshift-apiserver-operator(0d72e695-0183-4ee8-8add-5425e67f7138)\"" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c" podUID="0d72e695-0183-4ee8-8add-5425e67f7138" Mar 18 09:57:30.331417 master-0 kubenswrapper[8244]: I0318 09:57:30.330701 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-pzqqc_0999f781-3299-4cb6-ba76-2a4f4584c685/kube-controller-manager-operator/1.log" Mar 18 09:57:30.331672 master-0 kubenswrapper[8244]: I0318 09:57:30.331609 8244 generic.go:334] "Generic (PLEG): container finished" podID="0999f781-3299-4cb6-ba76-2a4f4584c685" containerID="bd5fe04a9ede0b84f18ed45bdc7555eb6593622c877cdf75babe4d3ead617eed" exitCode=255 Mar 18 09:57:30.331672 master-0 kubenswrapper[8244]: I0318 09:57:30.331648 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc" event={"ID":"0999f781-3299-4cb6-ba76-2a4f4584c685","Type":"ContainerDied","Data":"bd5fe04a9ede0b84f18ed45bdc7555eb6593622c877cdf75babe4d3ead617eed"} Mar 18 09:57:30.333316 master-0 kubenswrapper[8244]: I0318 09:57:30.332384 8244 scope.go:117] "RemoveContainer" containerID="bd5fe04a9ede0b84f18ed45bdc7555eb6593622c877cdf75babe4d3ead617eed" Mar 18 09:57:30.333316 master-0 kubenswrapper[8244]: E0318 09:57:30.332747 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager-operator 
pod=kube-controller-manager-operator-ff989d6cc-pzqqc_openshift-kube-controller-manager-operator(0999f781-3299-4cb6-ba76-2a4f4584c685)\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc" podUID="0999f781-3299-4cb6-ba76-2a4f4584c685"
Mar 18 09:57:30.335244 master-0 kubenswrapper[8244]: I0318 09:57:30.335187 8244 generic.go:334] "Generic (PLEG): container finished" podID="c83737980b9ee109184b1d78e942cf36" containerID="614bad60cc203e379c2219ece0e463fc923ffaef207f86d7d7dbe59e9131f846" exitCode=0
Mar 18 09:57:30.335404 master-0 kubenswrapper[8244]: I0318 09:57:30.335372 8244 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ad9370766ae18aa384f3b2f07e9d3cada2bbe156f6bcba4f02016b49f4e713f"
Mar 18 09:57:30.344488 master-0 kubenswrapper[8244]: I0318 09:57:30.344394 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-smghb_6a6a616d-012a-479e-ab3d-b21295ea1805/kube-apiserver-operator/1.log"
Mar 18 09:57:30.345191 master-0 kubenswrapper[8244]: I0318 09:57:30.345141 8244 generic.go:334] "Generic (PLEG): container finished" podID="6a6a616d-012a-479e-ab3d-b21295ea1805" containerID="81cd35f002f1f429688cbe007f6618850051907823664181496568b308ab47bb" exitCode=255
Mar 18 09:57:30.345277 master-0 kubenswrapper[8244]: I0318 09:57:30.345232 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb" event={"ID":"6a6a616d-012a-479e-ab3d-b21295ea1805","Type":"ContainerDied","Data":"81cd35f002f1f429688cbe007f6618850051907823664181496568b308ab47bb"}
Mar 18 09:57:30.345883 master-0 kubenswrapper[8244]: I0318 09:57:30.345815 8244 scope.go:117] "RemoveContainer" containerID="81cd35f002f1f429688cbe007f6618850051907823664181496568b308ab47bb"
Mar 18 09:57:30.346226 master-0 kubenswrapper[8244]: E0318 09:57:30.346178 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-operator pod=kube-apiserver-operator-8b68b9d9b-smghb_openshift-kube-apiserver-operator(6a6a616d-012a-479e-ab3d-b21295ea1805)\"" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb" podUID="6a6a616d-012a-479e-ab3d-b21295ea1805"
Mar 18 09:57:30.353174 master-0 kubenswrapper[8244]: I0318 09:57:30.353095 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-4tlnm_a078565a-6970-4f42-84f4-938f1d637245/etcd-operator/1.log"
Mar 18 09:57:30.361778 master-0 kubenswrapper[8244]: I0318 09:57:30.359150 8244 generic.go:334] "Generic (PLEG): container finished" podID="a078565a-6970-4f42-84f4-938f1d637245" containerID="ff998e161f24e27e62ffb41d5f1af2c4149f9709b9260bb197fe3f8937665152" exitCode=255
Mar 18 09:57:30.361778 master-0 kubenswrapper[8244]: I0318 09:57:30.359699 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" event={"ID":"a078565a-6970-4f42-84f4-938f1d637245","Type":"ContainerDied","Data":"ff998e161f24e27e62ffb41d5f1af2c4149f9709b9260bb197fe3f8937665152"}
Mar 18 09:57:30.364683 master-0 kubenswrapper[8244]: I0318 09:57:30.364542 8244 scope.go:117] "RemoveContainer" containerID="ff998e161f24e27e62ffb41d5f1af2c4149f9709b9260bb197fe3f8937665152"
Mar 18 09:57:30.366128 master-0 kubenswrapper[8244]: I0318 09:57:30.366081 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-4q9tr_f076eaf0-b041-4db0-ba06-3d85e23bb654/authentication-operator/1.log"
Mar 18 09:57:30.366851 master-0 kubenswrapper[8244]: I0318 09:57:30.366792 8244 generic.go:334] "Generic (PLEG): container finished" podID="f076eaf0-b041-4db0-ba06-3d85e23bb654" containerID="7899027579e9cd9f7fcc12484390d733833facf13d02a5193e75c23ee942e285" exitCode=255
Mar 18 09:57:30.366918 master-0 kubenswrapper[8244]: I0318 09:57:30.366873 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr" event={"ID":"f076eaf0-b041-4db0-ba06-3d85e23bb654","Type":"ContainerDied","Data":"7899027579e9cd9f7fcc12484390d733833facf13d02a5193e75c23ee942e285"}
Mar 18 09:57:30.368074 master-0 kubenswrapper[8244]: I0318 09:57:30.368038 8244 scope.go:117] "RemoveContainer" containerID="7899027579e9cd9f7fcc12484390d733833facf13d02a5193e75c23ee942e285"
Mar 18 09:57:30.369367 master-0 kubenswrapper[8244]: I0318 09:57:30.369346 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-b865698dc-pgtbr_bb35841e-d992-4044-aaaa-06c9faf47bd0/service-ca-operator/1.log"
Mar 18 09:57:30.370036 master-0 kubenswrapper[8244]: I0318 09:57:30.369984 8244 generic.go:334] "Generic (PLEG): container finished" podID="bb35841e-d992-4044-aaaa-06c9faf47bd0" containerID="76f59e21155c1d71669d55451f86d8b5a3fe790b476c844c6bc57c22a2e68f76" exitCode=255
Mar 18 09:57:30.370101 master-0 kubenswrapper[8244]: I0318 09:57:30.370062 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr" event={"ID":"bb35841e-d992-4044-aaaa-06c9faf47bd0","Type":"ContainerDied","Data":"76f59e21155c1d71669d55451f86d8b5a3fe790b476c844c6bc57c22a2e68f76"}
Mar 18 09:57:30.371065 master-0 kubenswrapper[8244]: E0318 09:57:30.370989 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=etcd-operator pod=etcd-operator-8544cbcf9c-4tlnm_openshift-etcd-operator(a078565a-6970-4f42-84f4-938f1d637245)\"" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" podUID="a078565a-6970-4f42-84f4-938f1d637245"
Mar 18 09:57:30.372395 master-0 kubenswrapper[8244]: E0318 09:57:30.372330 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=authentication-operator pod=authentication-operator-5885bfd7f4-4q9tr_openshift-authentication-operator(f076eaf0-b041-4db0-ba06-3d85e23bb654)\"" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr" podUID="f076eaf0-b041-4db0-ba06-3d85e23bb654"
Mar 18 09:57:30.372701 master-0 kubenswrapper[8244]: I0318 09:57:30.372652 8244 scope.go:117] "RemoveContainer" containerID="76f59e21155c1d71669d55451f86d8b5a3fe790b476c844c6bc57c22a2e68f76"
Mar 18 09:57:30.373162 master-0 kubenswrapper[8244]: E0318 09:57:30.373106 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=service-ca-operator pod=service-ca-operator-b865698dc-pgtbr_openshift-service-ca-operator(bb35841e-d992-4044-aaaa-06c9faf47bd0)\"" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr" podUID="bb35841e-d992-4044-aaaa-06c9faf47bd0"
Mar 18 09:57:30.468860 master-0 kubenswrapper[8244]: I0318 09:57:30.468812 8244 scope.go:117] "RemoveContainer" containerID="fe475c93acb3e152a06334aa122f61bc3dfe0a7c617c3c6b5b5bc407433dfd76"
Mar 18 09:57:30.502632 master-0 kubenswrapper[8244]: I0318 09:57:30.502527 8244 scope.go:117] "RemoveContainer" containerID="da02ee0de03a088a8c40f809ca8f007d6167a1c499d12f1066049752159499b0"
Mar 18 09:57:30.523648 master-0 kubenswrapper[8244]: I0318 09:57:30.523586 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 09:57:30.550883 master-0 kubenswrapper[8244]: I0318 09:57:30.550840 8244 scope.go:117] "RemoveContainer" containerID="5852b37c5e8c94f0baa4c4a1981174d60f6d9f69d3672da3d78ad25102d900a1"
Mar 18 09:57:30.567874 master-0 kubenswrapper[8244]: I0318 09:57:30.566750 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"c83737980b9ee109184b1d78e942cf36\" (UID: \"c83737980b9ee109184b1d78e942cf36\") "
Mar 18 09:57:30.567874 master-0 kubenswrapper[8244]: I0318 09:57:30.566811 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"c83737980b9ee109184b1d78e942cf36\" (UID: \"c83737980b9ee109184b1d78e942cf36\") "
Mar 18 09:57:30.567874 master-0 kubenswrapper[8244]: I0318 09:57:30.566916 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets" (OuterVolumeSpecName: "secrets") pod "c83737980b9ee109184b1d78e942cf36" (UID: "c83737980b9ee109184b1d78e942cf36"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:57:30.567874 master-0 kubenswrapper[8244]: I0318 09:57:30.567075 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs" (OuterVolumeSpecName: "logs") pod "c83737980b9ee109184b1d78e942cf36" (UID: "c83737980b9ee109184b1d78e942cf36"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:57:30.567874 master-0 kubenswrapper[8244]: I0318 09:57:30.567082 8244 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") on node \"master-0\" DevicePath \"\""
Mar 18 09:57:30.578166 master-0 kubenswrapper[8244]: I0318 09:57:30.578139 8244 scope.go:117] "RemoveContainer" containerID="809e75633cdef66e6f08501f6041dd63595d2c3bfee4b8663f566a1c8682596e"
Mar 18 09:57:30.609174 master-0 kubenswrapper[8244]: I0318 09:57:30.608737 8244 scope.go:117] "RemoveContainer" containerID="756a2f4fb3414c500a82e436fbad8cd30da785b7959d7459fc20c6af350a8060"
Mar 18 09:57:30.625879 master-0 kubenswrapper[8244]: I0318 09:57:30.625809 8244 scope.go:117] "RemoveContainer" containerID="e5c331496115ef5ceb50ea93103ae754d1d16032e25eefad5a38ee8ba0e6ac68"
Mar 18 09:57:30.643560 master-0 kubenswrapper[8244]: I0318 09:57:30.643443 8244 scope.go:117] "RemoveContainer" containerID="5230f2c731392582b4c5b7f1d1739dca596269f4bff091decf0daf9fa0a42c23"
Mar 18 09:57:30.655942 master-0 kubenswrapper[8244]: I0318 09:57:30.655923 8244 scope.go:117] "RemoveContainer" containerID="baecef73d93e3ca9ff934b2e1c379d4ea8c4c91e3cae11e23b740ee52145d967"
Mar 18 09:57:30.668277 master-0 kubenswrapper[8244]: I0318 09:57:30.668239 8244 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") on node \"master-0\" DevicePath \"\""
Mar 18 09:57:30.673934 master-0 kubenswrapper[8244]: I0318 09:57:30.673879 8244 scope.go:117] "RemoveContainer" containerID="035a83745bfe6ed219f87a31bd7766c9d9b162354f5f4e36d6dc8a6cc1dbc053"
Mar 18 09:57:30.689723 master-0 kubenswrapper[8244]: I0318 09:57:30.689655 8244 scope.go:117] "RemoveContainer" containerID="86e19dd48a4220e684cd4591a7ea73d2539f388a0f50f6f6c55feee37bcbb65f"
Mar 18 09:57:30.711972 master-0 kubenswrapper[8244]: I0318 09:57:30.711941 8244 scope.go:117] "RemoveContainer" containerID="21ea6abc98e78a0444eb255d9f1edf6ce13e5e0f11a1d4b38c35dd0e5e280fcf"
Mar 18 09:57:31.384443 master-0 kubenswrapper[8244]: I0318 09:57:31.384343 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-6bb5bfb6fd-lk698_ec53d7fa-445b-4e1d-84ef-545f08e80ccc/kube-storage-version-migrator-operator/1.log"
Mar 18 09:57:31.386697 master-0 kubenswrapper[8244]: I0318 09:57:31.386652 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 09:57:31.388568 master-0 kubenswrapper[8244]: I0318 09:57:31.388521 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-g25jq_3646e0cd-49c9-4a98-a2e3-efe9359cc6c4/openshift-controller-manager-operator/1.log"
Mar 18 09:57:31.391902 master-0 kubenswrapper[8244]: I0318 09:57:31.390882 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-smghb_6a6a616d-012a-479e-ab3d-b21295ea1805/kube-apiserver-operator/1.log"
Mar 18 09:57:31.392934 master-0 kubenswrapper[8244]: I0318 09:57:31.392895 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-4q9tr_f076eaf0-b041-4db0-ba06-3d85e23bb654/authentication-operator/1.log"
Mar 18 09:57:31.394664 master-0 kubenswrapper[8244]: I0318 09:57:31.394624 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-b865698dc-pgtbr_bb35841e-d992-4044-aaaa-06c9faf47bd0/service-ca-operator/1.log"
Mar 18 09:57:31.396573 master-0 kubenswrapper[8244]: I0318 09:57:31.396524 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-vj8tt_3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6/kube-scheduler-operator-container/1.log"
Mar 18 09:57:31.398227 master-0 kubenswrapper[8244]: I0318 09:57:31.398179 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-8srnz_9ccdc221-4ec5-487e-8ec4-85284ed628d8/network-operator/1.log"
Mar 18 09:57:31.400181 master-0 kubenswrapper[8244]: I0318 09:57:31.400145 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-495pg_0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480/openshift-config-operator/1.log"
Mar 18 09:57:31.400614 master-0 kubenswrapper[8244]: I0318 09:57:31.400554 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" event={"ID":"0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480","Type":"ContainerStarted","Data":"28649efad05eac5b0f41333b14d359f00b8f30fb75f4db907f9a07ca5b91b9da"}
Mar 18 09:57:31.400733 master-0 kubenswrapper[8244]: I0318 09:57:31.400697 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg"
Mar 18 09:57:31.402133 master-0 kubenswrapper[8244]: I0318 09:57:31.402068 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-d65958b8-zz68c_0d72e695-0183-4ee8-8add-5425e67f7138/openshift-apiserver-operator/1.log"
Mar 18 09:57:31.403678 master-0 kubenswrapper[8244]: I0318 09:57:31.403627 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-pzqqc_0999f781-3299-4cb6-ba76-2a4f4584c685/kube-controller-manager-operator/1.log"
Mar 18 09:57:31.405369 master-0 kubenswrapper[8244]: I0318 09:57:31.405331 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-4tlnm_a078565a-6970-4f42-84f4-938f1d637245/etcd-operator/1.log"
Mar 18 09:57:31.462477 master-0 kubenswrapper[8244]: I0318 09:57:31.462408 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:57:31.471915 master-0 kubenswrapper[8244]: I0318 09:57:31.471800 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:57:31.743370 master-0 kubenswrapper[8244]: I0318 09:57:31.743303 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Mar 18 09:57:31.748419 master-0 kubenswrapper[8244]: I0318 09:57:31.748370 8244 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c83737980b9ee109184b1d78e942cf36" path="/var/lib/kubelet/pods/c83737980b9ee109184b1d78e942cf36/volumes"
Mar 18 09:57:31.748993 master-0 kubenswrapper[8244]: I0318 09:57:31.748956 8244 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID=""
Mar 18 09:57:31.767992 master-0 kubenswrapper[8244]: I0318 09:57:31.766407 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Mar 18 09:57:31.767992 master-0 kubenswrapper[8244]: I0318 09:57:31.766459 8244 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="84950359-88a0-4ada-9d8d-a11326b2957d"
Mar 18 09:57:31.770014 master-0 kubenswrapper[8244]: I0318 09:57:31.769975 8244 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Mar 18 09:57:31.770014 master-0 kubenswrapper[8244]: I0318 09:57:31.770005 8244 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="84950359-88a0-4ada-9d8d-a11326b2957d"
Mar 18 09:57:31.786647 master-0 kubenswrapper[8244]: I0318 09:57:31.785426 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/54a208d1-afe8-49b5-92e0-e27afb4abc80-var-lock\") pod \"54a208d1-afe8-49b5-92e0-e27afb4abc80\" (UID: \"54a208d1-afe8-49b5-92e0-e27afb4abc80\") "
Mar 18 09:57:31.786647 master-0 kubenswrapper[8244]: I0318 09:57:31.785548 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54a208d1-afe8-49b5-92e0-e27afb4abc80-kubelet-dir\") pod \"54a208d1-afe8-49b5-92e0-e27afb4abc80\" (UID: \"54a208d1-afe8-49b5-92e0-e27afb4abc80\") "
Mar 18 09:57:31.786647 master-0 kubenswrapper[8244]: I0318 09:57:31.785629 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54a208d1-afe8-49b5-92e0-e27afb4abc80-kube-api-access\") pod \"54a208d1-afe8-49b5-92e0-e27afb4abc80\" (UID: \"54a208d1-afe8-49b5-92e0-e27afb4abc80\") "
Mar 18 09:57:31.786647 master-0 kubenswrapper[8244]: I0318 09:57:31.785895 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54a208d1-afe8-49b5-92e0-e27afb4abc80-var-lock" (OuterVolumeSpecName: "var-lock") pod "54a208d1-afe8-49b5-92e0-e27afb4abc80" (UID: "54a208d1-afe8-49b5-92e0-e27afb4abc80"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:57:31.786647 master-0 kubenswrapper[8244]: I0318 09:57:31.785928 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54a208d1-afe8-49b5-92e0-e27afb4abc80-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "54a208d1-afe8-49b5-92e0-e27afb4abc80" (UID: "54a208d1-afe8-49b5-92e0-e27afb4abc80"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:57:31.786647 master-0 kubenswrapper[8244]: I0318 09:57:31.786028 8244 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/54a208d1-afe8-49b5-92e0-e27afb4abc80-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 09:57:31.786647 master-0 kubenswrapper[8244]: I0318 09:57:31.786072 8244 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54a208d1-afe8-49b5-92e0-e27afb4abc80-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 09:57:31.788902 master-0 kubenswrapper[8244]: I0318 09:57:31.788808 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54a208d1-afe8-49b5-92e0-e27afb4abc80-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "54a208d1-afe8-49b5-92e0-e27afb4abc80" (UID: "54a208d1-afe8-49b5-92e0-e27afb4abc80"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 09:57:31.887736 master-0 kubenswrapper[8244]: I0318 09:57:31.887644 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54a208d1-afe8-49b5-92e0-e27afb4abc80-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 09:57:32.413851 master-0 kubenswrapper[8244]: I0318 09:57:32.413762 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"54a208d1-afe8-49b5-92e0-e27afb4abc80","Type":"ContainerDied","Data":"412e9b55f8faac02229faa1064ae91e5d24b587483498fa55a3224e6f756199c"}
Mar 18 09:57:32.414105 master-0 kubenswrapper[8244]: I0318 09:57:32.413877 8244 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="412e9b55f8faac02229faa1064ae91e5d24b587483498fa55a3224e6f756199c"
Mar 18 09:57:32.414105 master-0 kubenswrapper[8244]: I0318 09:57:32.413914 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Mar 18 09:57:32.652681 master-0 kubenswrapper[8244]: I0318 09:57:32.652624 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 18 09:57:32.661133 master-0 kubenswrapper[8244]: I0318 09:57:32.656519 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Mar 18 09:57:32.670137 master-0 kubenswrapper[8244]: I0318 09:57:32.664765 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"]
Mar 18 09:57:33.191599 master-0 kubenswrapper[8244]: I0318 09:57:33.191489 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg"
Mar 18 09:57:33.422122 master-0 kubenswrapper[8244]: I0318 09:57:33.422064 8244 generic.go:334] "Generic (PLEG): container finished" podID="8413125cf444e5c95f023c5dd9c6151e" containerID="47003cd7242b25a319c29a44ee35ea3c35fda83145ceddfb4905fe01131e1a69" exitCode=0
Mar 18 09:57:33.422714 master-0 kubenswrapper[8244]: I0318 09:57:33.422676 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerDied","Data":"47003cd7242b25a319c29a44ee35ea3c35fda83145ceddfb4905fe01131e1a69"}
Mar 18 09:57:33.422787 master-0 kubenswrapper[8244]: I0318 09:57:33.422714 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"b5bf3da90da776e6c122f127625565a6fdc3ad79ed5366d030c0c0ccb65f53d0"}
Mar 18 09:57:33.429320 master-0 kubenswrapper[8244]: I0318 09:57:33.429251 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=1.429231705 podStartE2EDuration="1.429231705s" podCreationTimestamp="2026-03-18 09:57:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:57:33.428574039 +0000 UTC m=+169.908310177" watchObservedRunningTime="2026-03-18 09:57:33.429231705 +0000 UTC m=+169.908967833"
Mar 18 09:57:33.498516 master-0 kubenswrapper[8244]: I0318 09:57:33.498404 8244 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr"
Mar 18 09:57:33.499094 master-0 kubenswrapper[8244]: I0318 09:57:33.499060 8244 scope.go:117] "RemoveContainer" containerID="7899027579e9cd9f7fcc12484390d733833facf13d02a5193e75c23ee942e285"
Mar 18 09:57:33.499309 master-0 kubenswrapper[8244]: E0318 09:57:33.499266 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=authentication-operator pod=authentication-operator-5885bfd7f4-4q9tr_openshift-authentication-operator(f076eaf0-b041-4db0-ba06-3d85e23bb654)\"" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr" podUID="f076eaf0-b041-4db0-ba06-3d85e23bb654"
Mar 18 09:57:33.951104 master-0 kubenswrapper[8244]: I0318 09:57:33.950583 8244 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Mar 18 09:57:33.951104 master-0 kubenswrapper[8244]: I0318 09:57:33.950806 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller" containerID="cri-o://a8a79bb9813c53d6a7944ac3a61efc1cc0406057f3915265e59c26643cc48a9e" gracePeriod=30
Mar 18 09:57:33.951104 master-0 kubenswrapper[8244]: I0318 09:57:33.950957 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" containerID="cri-o://459fcfb70fb899949af51fd621c6c7e3b1b5510c468c992c115b7f0303ef5eb8" gracePeriod=30
Mar 18 09:57:33.952601 master-0 kubenswrapper[8244]: I0318 09:57:33.951977 8244 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 18 09:57:33.952601 master-0 kubenswrapper[8244]: E0318 09:57:33.952150 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller"
Mar 18 09:57:33.952601 master-0 kubenswrapper[8244]: I0318 09:57:33.952163 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller"
Mar 18 09:57:33.952601 master-0 kubenswrapper[8244]: E0318 09:57:33.952175 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 18 09:57:33.952601 master-0 kubenswrapper[8244]: I0318 09:57:33.952183 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 18 09:57:33.952601 master-0 kubenswrapper[8244]: E0318 09:57:33.952196 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54a208d1-afe8-49b5-92e0-e27afb4abc80" containerName="installer"
Mar 18 09:57:33.952601 master-0 kubenswrapper[8244]: I0318 09:57:33.952204 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="54a208d1-afe8-49b5-92e0-e27afb4abc80" containerName="installer"
Mar 18 09:57:33.952601 master-0 kubenswrapper[8244]: E0318 09:57:33.952213 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 18 09:57:33.952601 master-0 kubenswrapper[8244]: I0318 09:57:33.952221 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 18 09:57:33.952601 master-0 kubenswrapper[8244]: E0318 09:57:33.952234 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 18 09:57:33.952601 master-0 kubenswrapper[8244]: I0318 09:57:33.952241 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 18 09:57:33.952601 master-0 kubenswrapper[8244]: I0318 09:57:33.952330 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 18 09:57:33.952601 master-0 kubenswrapper[8244]: I0318 09:57:33.952347 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 18 09:57:33.952601 master-0 kubenswrapper[8244]: I0318 09:57:33.952364 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="54a208d1-afe8-49b5-92e0-e27afb4abc80" containerName="installer"
Mar 18 09:57:33.952601 master-0 kubenswrapper[8244]: I0318 09:57:33.952374 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller"
Mar 18 09:57:33.952601 master-0 kubenswrapper[8244]: E0318 09:57:33.952456 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 18 09:57:33.952601 master-0 kubenswrapper[8244]: I0318 09:57:33.952465 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 18 09:57:33.952601 master-0 kubenswrapper[8244]: I0318 09:57:33.952545 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 18 09:57:33.961547 master-0 kubenswrapper[8244]: I0318 09:57:33.956175 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 18 09:57:33.961547 master-0 kubenswrapper[8244]: I0318 09:57:33.956805 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:57:34.019183 master-0 kubenswrapper[8244]: I0318 09:57:34.019128 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b82be17f9a809bd5efbd88c0026e8713-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"b82be17f9a809bd5efbd88c0026e8713\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:57:34.019412 master-0 kubenswrapper[8244]: I0318 09:57:34.019209 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b82be17f9a809bd5efbd88c0026e8713-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"b82be17f9a809bd5efbd88c0026e8713\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:57:34.125892 master-0 kubenswrapper[8244]: I0318 09:57:34.120111 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b82be17f9a809bd5efbd88c0026e8713-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"b82be17f9a809bd5efbd88c0026e8713\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:57:34.125892 master-0 kubenswrapper[8244]: I0318 09:57:34.120187 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b82be17f9a809bd5efbd88c0026e8713-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"b82be17f9a809bd5efbd88c0026e8713\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:57:34.125892 master-0 kubenswrapper[8244]: I0318 09:57:34.120269 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b82be17f9a809bd5efbd88c0026e8713-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"b82be17f9a809bd5efbd88c0026e8713\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:57:34.125892 master-0 kubenswrapper[8244]: I0318 09:57:34.120312 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b82be17f9a809bd5efbd88c0026e8713-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"b82be17f9a809bd5efbd88c0026e8713\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:57:34.133345 master-0 kubenswrapper[8244]: I0318 09:57:34.130626 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:57:34.134203 master-0 kubenswrapper[8244]: I0318 09:57:34.134159 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 18 09:57:34.144506 master-0 kubenswrapper[8244]: I0318 09:57:34.144452 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:57:34.175429 master-0 kubenswrapper[8244]: I0318 09:57:34.175370 8244 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="25723a8c-3aeb-48bf-96ca-b3a1e7e388ce"
Mar 18 09:57:34.221196 master-0 kubenswrapper[8244]: I0318 09:57:34.221149 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") "
Mar 18 09:57:34.221660 master-0 kubenswrapper[8244]: I0318 09:57:34.221202 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") "
Mar 18 09:57:34.221660 master-0 kubenswrapper[8244]: I0318 09:57:34.221264 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") "
Mar 18 09:57:34.221660 master-0 kubenswrapper[8244]: I0318 09:57:34.221312 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") "
Mar 18 09:57:34.221660 master-0 kubenswrapper[8244]: I0318 09:57:34.221299 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:57:34.221660 master-0 kubenswrapper[8244]: I0318 09:57:34.221343 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") "
Mar 18 09:57:34.221660 master-0 kubenswrapper[8244]: I0318 09:57:34.221365 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:57:34.221660 master-0 kubenswrapper[8244]: I0318 09:57:34.221361 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets" (OuterVolumeSpecName: "secrets") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:57:34.221660 master-0 kubenswrapper[8244]: I0318 09:57:34.221394 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config" (OuterVolumeSpecName: "config") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:57:34.221660 master-0 kubenswrapper[8244]: I0318 09:57:34.221459 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs" (OuterVolumeSpecName: "logs") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:57:34.221660 master-0 kubenswrapper[8244]: I0318 09:57:34.221565 8244 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") on node \"master-0\" DevicePath \"\""
Mar 18 09:57:34.221660 master-0 kubenswrapper[8244]: I0318 09:57:34.221580 8244 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") on node \"master-0\" DevicePath \"\""
Mar 18 09:57:34.221660 master-0 kubenswrapper[8244]: I0318 09:57:34.221594 8244 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\""
Mar 18 09:57:34.221660 master-0 kubenswrapper[8244]: I0318 09:57:34.221607 8244 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") on node \"master-0\" DevicePath \"\""
Mar 18 09:57:34.221660 master-0 kubenswrapper[8244]: I0318 09:57:34.221638 8244 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") on node \"master-0\" DevicePath \"\""
Mar 18 09:57:34.294249 master-0 kubenswrapper[8244]: I0318 09:57:34.293925 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nzqck"]
Mar 18 09:57:34.295106 master-0 kubenswrapper[8244]: I0318 09:57:34.295081 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nzqck"
Mar 18 09:57:34.297225 master-0 kubenswrapper[8244]: I0318 09:57:34.297153 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-2w9kv"
Mar 18 09:57:34.301072 master-0 kubenswrapper[8244]: I0318 09:57:34.300857 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8w5rc"]
Mar 18 09:57:34.302387 master-0 kubenswrapper[8244]: I0318 09:57:34.302348 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8w5rc"
Mar 18 09:57:34.306588 master-0 kubenswrapper[8244]: I0318 09:57:34.306527 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jl7c8"]
Mar 18 09:57:34.307124 master-0 kubenswrapper[8244]: I0318 09:57:34.307084 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-wqrrj"
Mar 18 09:57:34.311511 master-0 kubenswrapper[8244]: I0318 09:57:34.311467 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pdfn6"]
Mar 18 09:57:34.315519 master-0 kubenswrapper[8244]: I0318 09:57:34.315484 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jl7c8" Mar 18 09:57:34.317229 master-0 kubenswrapper[8244]: I0318 09:57:34.317200 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-gllg9" Mar 18 09:57:34.324486 master-0 kubenswrapper[8244]: I0318 09:57:34.324445 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db376fea-5756-4bc2-9685-f32730b5a6f7-utilities\") pod \"community-operators-nzqck\" (UID: \"db376fea-5756-4bc2-9685-f32730b5a6f7\") " pod="openshift-marketplace/community-operators-nzqck" Mar 18 09:57:34.324623 master-0 kubenswrapper[8244]: I0318 09:57:34.324527 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db376fea-5756-4bc2-9685-f32730b5a6f7-catalog-content\") pod \"community-operators-nzqck\" (UID: \"db376fea-5756-4bc2-9685-f32730b5a6f7\") " pod="openshift-marketplace/community-operators-nzqck" Mar 18 09:57:34.324623 master-0 kubenswrapper[8244]: I0318 09:57:34.324559 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6qn5\" (UniqueName: \"kubernetes.io/projected/db376fea-5756-4bc2-9685-f32730b5a6f7-kube-api-access-r6qn5\") pod \"community-operators-nzqck\" (UID: \"db376fea-5756-4bc2-9685-f32730b5a6f7\") " pod="openshift-marketplace/community-operators-nzqck" Mar 18 09:57:34.334520 master-0 kubenswrapper[8244]: I0318 09:57:34.329457 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nzqck"] Mar 18 09:57:34.334520 master-0 kubenswrapper[8244]: I0318 09:57:34.329504 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8w5rc"] Mar 18 09:57:34.334520 master-0 kubenswrapper[8244]: I0318 
09:57:34.329523 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jl7c8"] Mar 18 09:57:34.334520 master-0 kubenswrapper[8244]: I0318 09:57:34.329703 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pdfn6" Mar 18 09:57:34.334520 master-0 kubenswrapper[8244]: I0318 09:57:34.331719 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-cgqlv" Mar 18 09:57:34.334520 master-0 kubenswrapper[8244]: I0318 09:57:34.332063 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pdfn6"] Mar 18 09:57:34.428239 master-0 kubenswrapper[8244]: I0318 09:57:34.428182 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9c87410-8689-4884-b5a8-df3ecbb7f1a4-catalog-content\") pod \"certified-operators-pdfn6\" (UID: \"b9c87410-8689-4884-b5a8-df3ecbb7f1a4\") " pod="openshift-marketplace/certified-operators-pdfn6" Mar 18 09:57:34.428239 master-0 kubenswrapper[8244]: I0318 09:57:34.428234 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k29kr\" (UniqueName: \"kubernetes.io/projected/0945a421-d7c4-46df-b3d9-507443627d51-kube-api-access-k29kr\") pod \"redhat-operators-jl7c8\" (UID: \"0945a421-d7c4-46df-b3d9-507443627d51\") " pod="openshift-marketplace/redhat-operators-jl7c8" Mar 18 09:57:34.428441 master-0 kubenswrapper[8244]: I0318 09:57:34.428260 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9c87410-8689-4884-b5a8-df3ecbb7f1a4-utilities\") pod \"certified-operators-pdfn6\" (UID: \"b9c87410-8689-4884-b5a8-df3ecbb7f1a4\") " pod="openshift-marketplace/certified-operators-pdfn6" Mar 
18 09:57:34.428441 master-0 kubenswrapper[8244]: I0318 09:57:34.428322 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db376fea-5756-4bc2-9685-f32730b5a6f7-utilities\") pod \"community-operators-nzqck\" (UID: \"db376fea-5756-4bc2-9685-f32730b5a6f7\") " pod="openshift-marketplace/community-operators-nzqck" Mar 18 09:57:34.428441 master-0 kubenswrapper[8244]: I0318 09:57:34.428421 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7-catalog-content\") pod \"redhat-marketplace-8w5rc\" (UID: \"1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7\") " pod="openshift-marketplace/redhat-marketplace-8w5rc" Mar 18 09:57:34.428564 master-0 kubenswrapper[8244]: I0318 09:57:34.428461 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0945a421-d7c4-46df-b3d9-507443627d51-catalog-content\") pod \"redhat-operators-jl7c8\" (UID: \"0945a421-d7c4-46df-b3d9-507443627d51\") " pod="openshift-marketplace/redhat-operators-jl7c8" Mar 18 09:57:34.428564 master-0 kubenswrapper[8244]: I0318 09:57:34.428499 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7-utilities\") pod \"redhat-marketplace-8w5rc\" (UID: \"1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7\") " pod="openshift-marketplace/redhat-marketplace-8w5rc" Mar 18 09:57:34.428564 master-0 kubenswrapper[8244]: I0318 09:57:34.428520 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0945a421-d7c4-46df-b3d9-507443627d51-utilities\") pod \"redhat-operators-jl7c8\" (UID: 
\"0945a421-d7c4-46df-b3d9-507443627d51\") " pod="openshift-marketplace/redhat-operators-jl7c8" Mar 18 09:57:34.428785 master-0 kubenswrapper[8244]: I0318 09:57:34.428747 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db376fea-5756-4bc2-9685-f32730b5a6f7-utilities\") pod \"community-operators-nzqck\" (UID: \"db376fea-5756-4bc2-9685-f32730b5a6f7\") " pod="openshift-marketplace/community-operators-nzqck" Mar 18 09:57:34.428964 master-0 kubenswrapper[8244]: I0318 09:57:34.428928 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzzjs\" (UniqueName: \"kubernetes.io/projected/1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7-kube-api-access-wzzjs\") pod \"redhat-marketplace-8w5rc\" (UID: \"1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7\") " pod="openshift-marketplace/redhat-marketplace-8w5rc" Mar 18 09:57:34.429027 master-0 kubenswrapper[8244]: I0318 09:57:34.428970 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5j9d\" (UniqueName: \"kubernetes.io/projected/b9c87410-8689-4884-b5a8-df3ecbb7f1a4-kube-api-access-l5j9d\") pod \"certified-operators-pdfn6\" (UID: \"b9c87410-8689-4884-b5a8-df3ecbb7f1a4\") " pod="openshift-marketplace/certified-operators-pdfn6" Mar 18 09:57:34.429027 master-0 kubenswrapper[8244]: I0318 09:57:34.428997 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db376fea-5756-4bc2-9685-f32730b5a6f7-catalog-content\") pod \"community-operators-nzqck\" (UID: \"db376fea-5756-4bc2-9685-f32730b5a6f7\") " pod="openshift-marketplace/community-operators-nzqck" Mar 18 09:57:34.429117 master-0 kubenswrapper[8244]: I0318 09:57:34.429040 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6qn5\" (UniqueName: 
\"kubernetes.io/projected/db376fea-5756-4bc2-9685-f32730b5a6f7-kube-api-access-r6qn5\") pod \"community-operators-nzqck\" (UID: \"db376fea-5756-4bc2-9685-f32730b5a6f7\") " pod="openshift-marketplace/community-operators-nzqck" Mar 18 09:57:34.429491 master-0 kubenswrapper[8244]: I0318 09:57:34.429449 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db376fea-5756-4bc2-9685-f32730b5a6f7-catalog-content\") pod \"community-operators-nzqck\" (UID: \"db376fea-5756-4bc2-9685-f32730b5a6f7\") " pod="openshift-marketplace/community-operators-nzqck" Mar 18 09:57:34.430435 master-0 kubenswrapper[8244]: I0318 09:57:34.430389 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"cd19f6008d757f0df145410d19ef8a0a4892b1a9570868a0f25d4db947985c0d"} Mar 18 09:57:34.430500 master-0 kubenswrapper[8244]: I0318 09:57:34.430443 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"7c521115ddea902792bf48e852856b512a5618ac1e205481b00a57548b627114"} Mar 18 09:57:34.430500 master-0 kubenswrapper[8244]: I0318 09:57:34.430466 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"98158274131e9b1c448b325fae48722d74ef93130547141c9b0a75c46c204334"} Mar 18 09:57:34.430721 master-0 kubenswrapper[8244]: I0318 09:57:34.430668 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:57:34.433616 master-0 kubenswrapper[8244]: I0318 09:57:34.433556 8244 generic.go:334] "Generic (PLEG): container finished" 
podID="46f265536aba6292ead501bc9b49f327" containerID="459fcfb70fb899949af51fd621c6c7e3b1b5510c468c992c115b7f0303ef5eb8" exitCode=0 Mar 18 09:57:34.433616 master-0 kubenswrapper[8244]: I0318 09:57:34.433584 8244 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="a8a79bb9813c53d6a7944ac3a61efc1cc0406057f3915265e59c26643cc48a9e" exitCode=0 Mar 18 09:57:34.433907 master-0 kubenswrapper[8244]: I0318 09:57:34.433639 8244 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd2ad4fe81a1a347f10f858030eebc98abfffaf65eba926cffe2c8990ddb0614" Mar 18 09:57:34.433907 master-0 kubenswrapper[8244]: I0318 09:57:34.433640 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 09:57:34.433907 master-0 kubenswrapper[8244]: I0318 09:57:34.433666 8244 scope.go:117] "RemoveContainer" containerID="591676dbc66d002b41b1524c76dbc54235b7dd32e488240aec01e853c0930dc0" Mar 18 09:57:34.439507 master-0 kubenswrapper[8244]: I0318 09:57:34.439465 8244 generic.go:334] "Generic (PLEG): container finished" podID="a4d7edd6-7975-468e-adea-138d92ed1be1" containerID="3a3c8396e15ffcccb1d7182e3eb6dbd5c5cf86adc58a45d80d2016b54dbad828" exitCode=0 Mar 18 09:57:34.439735 master-0 kubenswrapper[8244]: I0318 09:57:34.439540 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"a4d7edd6-7975-468e-adea-138d92ed1be1","Type":"ContainerDied","Data":"3a3c8396e15ffcccb1d7182e3eb6dbd5c5cf86adc58a45d80d2016b54dbad828"} Mar 18 09:57:34.442960 master-0 kubenswrapper[8244]: I0318 09:57:34.442328 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"b82be17f9a809bd5efbd88c0026e8713","Type":"ContainerStarted","Data":"addfa727884c409aa1d45cf9824fc92e862d34a63cb7525a44148f0f4014c92c"} Mar 18 
09:57:34.442960 master-0 kubenswrapper[8244]: I0318 09:57:34.442410 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"b82be17f9a809bd5efbd88c0026e8713","Type":"ContainerStarted","Data":"d063a0d92cbe3d5d6367eb94c917d657ef180467eaf86cd1b557d2c9341bdb9f"} Mar 18 09:57:34.452852 master-0 kubenswrapper[8244]: I0318 09:57:34.452801 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6qn5\" (UniqueName: \"kubernetes.io/projected/db376fea-5756-4bc2-9685-f32730b5a6f7-kube-api-access-r6qn5\") pod \"community-operators-nzqck\" (UID: \"db376fea-5756-4bc2-9685-f32730b5a6f7\") " pod="openshift-marketplace/community-operators-nzqck" Mar 18 09:57:34.459742 master-0 kubenswrapper[8244]: I0318 09:57:34.459681 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=2.459662557 podStartE2EDuration="2.459662557s" podCreationTimestamp="2026-03-18 09:57:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:57:34.453549498 +0000 UTC m=+170.933285626" watchObservedRunningTime="2026-03-18 09:57:34.459662557 +0000 UTC m=+170.939398685" Mar 18 09:57:34.530463 master-0 kubenswrapper[8244]: I0318 09:57:34.530393 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzzjs\" (UniqueName: \"kubernetes.io/projected/1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7-kube-api-access-wzzjs\") pod \"redhat-marketplace-8w5rc\" (UID: \"1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7\") " pod="openshift-marketplace/redhat-marketplace-8w5rc" Mar 18 09:57:34.530463 master-0 kubenswrapper[8244]: I0318 09:57:34.530459 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5j9d\" (UniqueName: 
\"kubernetes.io/projected/b9c87410-8689-4884-b5a8-df3ecbb7f1a4-kube-api-access-l5j9d\") pod \"certified-operators-pdfn6\" (UID: \"b9c87410-8689-4884-b5a8-df3ecbb7f1a4\") " pod="openshift-marketplace/certified-operators-pdfn6" Mar 18 09:57:34.530655 master-0 kubenswrapper[8244]: I0318 09:57:34.530523 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9c87410-8689-4884-b5a8-df3ecbb7f1a4-catalog-content\") pod \"certified-operators-pdfn6\" (UID: \"b9c87410-8689-4884-b5a8-df3ecbb7f1a4\") " pod="openshift-marketplace/certified-operators-pdfn6" Mar 18 09:57:34.530655 master-0 kubenswrapper[8244]: I0318 09:57:34.530555 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k29kr\" (UniqueName: \"kubernetes.io/projected/0945a421-d7c4-46df-b3d9-507443627d51-kube-api-access-k29kr\") pod \"redhat-operators-jl7c8\" (UID: \"0945a421-d7c4-46df-b3d9-507443627d51\") " pod="openshift-marketplace/redhat-operators-jl7c8" Mar 18 09:57:34.530655 master-0 kubenswrapper[8244]: I0318 09:57:34.530581 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9c87410-8689-4884-b5a8-df3ecbb7f1a4-utilities\") pod \"certified-operators-pdfn6\" (UID: \"b9c87410-8689-4884-b5a8-df3ecbb7f1a4\") " pod="openshift-marketplace/certified-operators-pdfn6" Mar 18 09:57:34.530655 master-0 kubenswrapper[8244]: I0318 09:57:34.530612 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7-catalog-content\") pod \"redhat-marketplace-8w5rc\" (UID: \"1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7\") " pod="openshift-marketplace/redhat-marketplace-8w5rc" Mar 18 09:57:34.530655 master-0 kubenswrapper[8244]: I0318 09:57:34.530633 8244 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0945a421-d7c4-46df-b3d9-507443627d51-catalog-content\") pod \"redhat-operators-jl7c8\" (UID: \"0945a421-d7c4-46df-b3d9-507443627d51\") " pod="openshift-marketplace/redhat-operators-jl7c8" Mar 18 09:57:34.530814 master-0 kubenswrapper[8244]: I0318 09:57:34.530658 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7-utilities\") pod \"redhat-marketplace-8w5rc\" (UID: \"1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7\") " pod="openshift-marketplace/redhat-marketplace-8w5rc" Mar 18 09:57:34.530814 master-0 kubenswrapper[8244]: I0318 09:57:34.530679 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0945a421-d7c4-46df-b3d9-507443627d51-utilities\") pod \"redhat-operators-jl7c8\" (UID: \"0945a421-d7c4-46df-b3d9-507443627d51\") " pod="openshift-marketplace/redhat-operators-jl7c8" Mar 18 09:57:34.531273 master-0 kubenswrapper[8244]: I0318 09:57:34.531232 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9c87410-8689-4884-b5a8-df3ecbb7f1a4-utilities\") pod \"certified-operators-pdfn6\" (UID: \"b9c87410-8689-4884-b5a8-df3ecbb7f1a4\") " pod="openshift-marketplace/certified-operators-pdfn6" Mar 18 09:57:34.531724 master-0 kubenswrapper[8244]: I0318 09:57:34.531694 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9c87410-8689-4884-b5a8-df3ecbb7f1a4-catalog-content\") pod \"certified-operators-pdfn6\" (UID: \"b9c87410-8689-4884-b5a8-df3ecbb7f1a4\") " pod="openshift-marketplace/certified-operators-pdfn6" Mar 18 09:57:34.532319 master-0 kubenswrapper[8244]: I0318 09:57:34.532282 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0945a421-d7c4-46df-b3d9-507443627d51-catalog-content\") pod \"redhat-operators-jl7c8\" (UID: \"0945a421-d7c4-46df-b3d9-507443627d51\") " pod="openshift-marketplace/redhat-operators-jl7c8" Mar 18 09:57:34.532355 master-0 kubenswrapper[8244]: I0318 09:57:34.532334 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7-utilities\") pod \"redhat-marketplace-8w5rc\" (UID: \"1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7\") " pod="openshift-marketplace/redhat-marketplace-8w5rc" Mar 18 09:57:34.532541 master-0 kubenswrapper[8244]: I0318 09:57:34.532515 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7-catalog-content\") pod \"redhat-marketplace-8w5rc\" (UID: \"1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7\") " pod="openshift-marketplace/redhat-marketplace-8w5rc" Mar 18 09:57:34.533631 master-0 kubenswrapper[8244]: I0318 09:57:34.533587 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0945a421-d7c4-46df-b3d9-507443627d51-utilities\") pod \"redhat-operators-jl7c8\" (UID: \"0945a421-d7c4-46df-b3d9-507443627d51\") " pod="openshift-marketplace/redhat-operators-jl7c8" Mar 18 09:57:34.549472 master-0 kubenswrapper[8244]: I0318 09:57:34.549416 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5j9d\" (UniqueName: \"kubernetes.io/projected/b9c87410-8689-4884-b5a8-df3ecbb7f1a4-kube-api-access-l5j9d\") pod \"certified-operators-pdfn6\" (UID: \"b9c87410-8689-4884-b5a8-df3ecbb7f1a4\") " pod="openshift-marketplace/certified-operators-pdfn6" Mar 18 09:57:34.554093 master-0 kubenswrapper[8244]: I0318 09:57:34.554050 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-k29kr\" (UniqueName: \"kubernetes.io/projected/0945a421-d7c4-46df-b3d9-507443627d51-kube-api-access-k29kr\") pod \"redhat-operators-jl7c8\" (UID: \"0945a421-d7c4-46df-b3d9-507443627d51\") " pod="openshift-marketplace/redhat-operators-jl7c8" Mar 18 09:57:34.557667 master-0 kubenswrapper[8244]: I0318 09:57:34.557594 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzzjs\" (UniqueName: \"kubernetes.io/projected/1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7-kube-api-access-wzzjs\") pod \"redhat-marketplace-8w5rc\" (UID: \"1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7\") " pod="openshift-marketplace/redhat-marketplace-8w5rc" Mar 18 09:57:34.653474 master-0 kubenswrapper[8244]: I0318 09:57:34.653344 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nzqck" Mar 18 09:57:34.678470 master-0 kubenswrapper[8244]: I0318 09:57:34.678415 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8w5rc" Mar 18 09:57:34.688021 master-0 kubenswrapper[8244]: I0318 09:57:34.687445 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jl7c8" Mar 18 09:57:34.707856 master-0 kubenswrapper[8244]: I0318 09:57:34.707397 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pdfn6" Mar 18 09:57:35.100994 master-0 kubenswrapper[8244]: I0318 09:57:35.100033 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nzqck"] Mar 18 09:57:35.194591 master-0 kubenswrapper[8244]: I0318 09:57:35.194526 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pdfn6"] Mar 18 09:57:35.199542 master-0 kubenswrapper[8244]: W0318 09:57:35.199498 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9c87410_8689_4884_b5a8_df3ecbb7f1a4.slice/crio-9fee5c93850116cedccb29b440cbb9d64b2e4cc6c4a2b7baa36f936fc07adce9 WatchSource:0}: Error finding container 9fee5c93850116cedccb29b440cbb9d64b2e4cc6c4a2b7baa36f936fc07adce9: Status 404 returned error can't find the container with id 9fee5c93850116cedccb29b440cbb9d64b2e4cc6c4a2b7baa36f936fc07adce9 Mar 18 09:57:35.254280 master-0 kubenswrapper[8244]: I0318 09:57:35.254209 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8w5rc"] Mar 18 09:57:35.266654 master-0 kubenswrapper[8244]: I0318 09:57:35.266614 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jl7c8"] Mar 18 09:57:35.324633 master-0 kubenswrapper[8244]: W0318 09:57:35.324477 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bc2b4ba_35ac_4d2d_adb9_362a6c0eb6a7.slice/crio-0ab9786ebf50a65e9432d654c3f52392db8e881a65fb26e7e3e002f1d0577eeb WatchSource:0}: Error finding container 0ab9786ebf50a65e9432d654c3f52392db8e881a65fb26e7e3e002f1d0577eeb: Status 404 returned error can't find the container with id 0ab9786ebf50a65e9432d654c3f52392db8e881a65fb26e7e3e002f1d0577eeb Mar 18 09:57:35.326311 master-0 kubenswrapper[8244]: W0318 09:57:35.326267 8244 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0945a421_d7c4_46df_b3d9_507443627d51.slice/crio-94d378b5868ac49c0d516b9285e21a09fb0d6dca212ba5b79072685e6b662578 WatchSource:0}: Error finding container 94d378b5868ac49c0d516b9285e21a09fb0d6dca212ba5b79072685e6b662578: Status 404 returned error can't find the container with id 94d378b5868ac49c0d516b9285e21a09fb0d6dca212ba5b79072685e6b662578 Mar 18 09:57:35.452680 master-0 kubenswrapper[8244]: I0318 09:57:35.452578 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"b82be17f9a809bd5efbd88c0026e8713","Type":"ContainerStarted","Data":"f87bef1b940bda1d2e7547c8dcb5e5796be97a1d493b8ed2c6e88695b761685f"} Mar 18 09:57:35.452680 master-0 kubenswrapper[8244]: I0318 09:57:35.452647 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"b82be17f9a809bd5efbd88c0026e8713","Type":"ContainerStarted","Data":"5cbeab69e9a6a1b3e6913474059c61b2d058adacd52d3e32e751d6880d4458a1"} Mar 18 09:57:35.452680 master-0 kubenswrapper[8244]: I0318 09:57:35.452667 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"b82be17f9a809bd5efbd88c0026e8713","Type":"ContainerStarted","Data":"81ce0ae9fb472bc21ba6f7e4abbc9bd3b5ebc1fd28497178cf7c458689b0f9d3"} Mar 18 09:57:35.457539 master-0 kubenswrapper[8244]: I0318 09:57:35.457502 8244 generic.go:334] "Generic (PLEG): container finished" podID="db376fea-5756-4bc2-9685-f32730b5a6f7" containerID="3895b0bbebe711b5e51fd8fde77e2f404e00d676164e6f589e15a4b9e9bdc150" exitCode=0 Mar 18 09:57:35.457677 master-0 kubenswrapper[8244]: I0318 09:57:35.457592 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nzqck" 
event={"ID":"db376fea-5756-4bc2-9685-f32730b5a6f7","Type":"ContainerDied","Data":"3895b0bbebe711b5e51fd8fde77e2f404e00d676164e6f589e15a4b9e9bdc150"} Mar 18 09:57:35.457801 master-0 kubenswrapper[8244]: I0318 09:57:35.457695 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nzqck" event={"ID":"db376fea-5756-4bc2-9685-f32730b5a6f7","Type":"ContainerStarted","Data":"cc949f0d8f85c68fa457f1194d4c5e8aa9bf8a96548dfb4976d04f8be5a7a9b6"} Mar 18 09:57:35.459368 master-0 kubenswrapper[8244]: I0318 09:57:35.459337 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jl7c8" event={"ID":"0945a421-d7c4-46df-b3d9-507443627d51","Type":"ContainerStarted","Data":"94d378b5868ac49c0d516b9285e21a09fb0d6dca212ba5b79072685e6b662578"} Mar 18 09:57:35.462517 master-0 kubenswrapper[8244]: I0318 09:57:35.462486 8244 generic.go:334] "Generic (PLEG): container finished" podID="b9c87410-8689-4884-b5a8-df3ecbb7f1a4" containerID="6e2ac2ef1c2d040695f9086d50b707203dabf820029ae8a9e577f8116338d92f" exitCode=0 Mar 18 09:57:35.462595 master-0 kubenswrapper[8244]: I0318 09:57:35.462519 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdfn6" event={"ID":"b9c87410-8689-4884-b5a8-df3ecbb7f1a4","Type":"ContainerDied","Data":"6e2ac2ef1c2d040695f9086d50b707203dabf820029ae8a9e577f8116338d92f"} Mar 18 09:57:35.462595 master-0 kubenswrapper[8244]: I0318 09:57:35.462554 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdfn6" event={"ID":"b9c87410-8689-4884-b5a8-df3ecbb7f1a4","Type":"ContainerStarted","Data":"9fee5c93850116cedccb29b440cbb9d64b2e4cc6c4a2b7baa36f936fc07adce9"} Mar 18 09:57:35.464317 master-0 kubenswrapper[8244]: I0318 09:57:35.464286 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8w5rc" 
event={"ID":"1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7","Type":"ContainerStarted","Data":"0ab9786ebf50a65e9432d654c3f52392db8e881a65fb26e7e3e002f1d0577eeb"} Mar 18 09:57:35.594246 master-0 kubenswrapper[8244]: I0318 09:57:35.592419 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=1.59240608 podStartE2EDuration="1.59240608s" podCreationTimestamp="2026-03-18 09:57:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:57:35.591734673 +0000 UTC m=+172.071470801" watchObservedRunningTime="2026-03-18 09:57:35.59240608 +0000 UTC m=+172.072142208" Mar 18 09:57:35.732881 master-0 kubenswrapper[8244]: I0318 09:57:35.732813 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 09:57:35.743474 master-0 kubenswrapper[8244]: I0318 09:57:35.743417 8244 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46f265536aba6292ead501bc9b49f327" path="/var/lib/kubelet/pods/46f265536aba6292ead501bc9b49f327/volumes" Mar 18 09:57:35.743974 master-0 kubenswrapper[8244]: I0318 09:57:35.743939 8244 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="" Mar 18 09:57:35.771886 master-0 kubenswrapper[8244]: I0318 09:57:35.771791 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 18 09:57:35.771886 master-0 kubenswrapper[8244]: I0318 09:57:35.771872 8244 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="25723a8c-3aeb-48bf-96ca-b3a1e7e388ce" Mar 18 09:57:35.772097 master-0 kubenswrapper[8244]: I0318 09:57:35.771904 8244 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 18 09:57:35.772097 master-0 kubenswrapper[8244]: I0318 09:57:35.771922 8244 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="25723a8c-3aeb-48bf-96ca-b3a1e7e388ce" Mar 18 09:57:35.868700 master-0 kubenswrapper[8244]: I0318 09:57:35.868613 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a4d7edd6-7975-468e-adea-138d92ed1be1-var-lock\") pod \"a4d7edd6-7975-468e-adea-138d92ed1be1\" (UID: \"a4d7edd6-7975-468e-adea-138d92ed1be1\") " Mar 18 09:57:35.868949 master-0 kubenswrapper[8244]: I0318 09:57:35.868716 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a4d7edd6-7975-468e-adea-138d92ed1be1-kubelet-dir\") pod \"a4d7edd6-7975-468e-adea-138d92ed1be1\" (UID: \"a4d7edd6-7975-468e-adea-138d92ed1be1\") " Mar 18 09:57:35.868949 master-0 kubenswrapper[8244]: I0318 09:57:35.868808 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a4d7edd6-7975-468e-adea-138d92ed1be1-kube-api-access\") pod \"a4d7edd6-7975-468e-adea-138d92ed1be1\" (UID: \"a4d7edd6-7975-468e-adea-138d92ed1be1\") " Mar 18 09:57:35.868949 master-0 kubenswrapper[8244]: I0318 09:57:35.868864 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4d7edd6-7975-468e-adea-138d92ed1be1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a4d7edd6-7975-468e-adea-138d92ed1be1" (UID: "a4d7edd6-7975-468e-adea-138d92ed1be1"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:57:35.868949 master-0 kubenswrapper[8244]: I0318 09:57:35.868873 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4d7edd6-7975-468e-adea-138d92ed1be1-var-lock" (OuterVolumeSpecName: "var-lock") pod "a4d7edd6-7975-468e-adea-138d92ed1be1" (UID: "a4d7edd6-7975-468e-adea-138d92ed1be1"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:57:35.869271 master-0 kubenswrapper[8244]: I0318 09:57:35.869230 8244 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a4d7edd6-7975-468e-adea-138d92ed1be1-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 09:57:35.869309 master-0 kubenswrapper[8244]: I0318 09:57:35.869268 8244 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a4d7edd6-7975-468e-adea-138d92ed1be1-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:57:35.873760 master-0 kubenswrapper[8244]: I0318 09:57:35.873701 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4d7edd6-7975-468e-adea-138d92ed1be1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a4d7edd6-7975-468e-adea-138d92ed1be1" (UID: "a4d7edd6-7975-468e-adea-138d92ed1be1"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:57:35.969962 master-0 kubenswrapper[8244]: I0318 09:57:35.969915 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a4d7edd6-7975-468e-adea-138d92ed1be1-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 09:57:36.470845 master-0 kubenswrapper[8244]: I0318 09:57:36.470790 8244 generic.go:334] "Generic (PLEG): container finished" podID="0945a421-d7c4-46df-b3d9-507443627d51" containerID="8f448cb12e0cc4fb34d60ad284a20b2c9aca8ec622e43fb96e75a5f038812980" exitCode=0 Mar 18 09:57:36.471393 master-0 kubenswrapper[8244]: I0318 09:57:36.470880 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jl7c8" event={"ID":"0945a421-d7c4-46df-b3d9-507443627d51","Type":"ContainerDied","Data":"8f448cb12e0cc4fb34d60ad284a20b2c9aca8ec622e43fb96e75a5f038812980"} Mar 18 09:57:36.485029 master-0 kubenswrapper[8244]: I0318 09:57:36.476835 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdfn6" event={"ID":"b9c87410-8689-4884-b5a8-df3ecbb7f1a4","Type":"ContainerStarted","Data":"e449b47779a9d7dba0806705cf39954c432c7970c3371ed0b172d5bc7722060d"} Mar 18 09:57:36.485029 master-0 kubenswrapper[8244]: I0318 09:57:36.478443 8244 generic.go:334] "Generic (PLEG): container finished" podID="1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7" containerID="d8336fe95d751b483d2ff986081042be8fc84379e88cfb3baaea2d45717c14ee" exitCode=0 Mar 18 09:57:36.485029 master-0 kubenswrapper[8244]: I0318 09:57:36.478502 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8w5rc" event={"ID":"1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7","Type":"ContainerDied","Data":"d8336fe95d751b483d2ff986081042be8fc84379e88cfb3baaea2d45717c14ee"} Mar 18 09:57:36.485029 master-0 kubenswrapper[8244]: I0318 09:57:36.479966 8244 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-nzqck" event={"ID":"db376fea-5756-4bc2-9685-f32730b5a6f7","Type":"ContainerStarted","Data":"8a4454e2a9f9cbf1f5dc18fe41a00327026fa7988233c2ea2c84ec074c1b0faf"} Mar 18 09:57:36.485029 master-0 kubenswrapper[8244]: I0318 09:57:36.481347 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 09:57:36.485029 master-0 kubenswrapper[8244]: I0318 09:57:36.481378 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"a4d7edd6-7975-468e-adea-138d92ed1be1","Type":"ContainerDied","Data":"306e8c3b294ebc0b6118bec332d25f893bead6bde2beb01fbece7b1ede0478ae"} Mar 18 09:57:36.485029 master-0 kubenswrapper[8244]: I0318 09:57:36.481396 8244 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="306e8c3b294ebc0b6118bec332d25f893bead6bde2beb01fbece7b1ede0478ae" Mar 18 09:57:37.491517 master-0 kubenswrapper[8244]: I0318 09:57:37.490982 8244 generic.go:334] "Generic (PLEG): container finished" podID="b9c87410-8689-4884-b5a8-df3ecbb7f1a4" containerID="e449b47779a9d7dba0806705cf39954c432c7970c3371ed0b172d5bc7722060d" exitCode=0 Mar 18 09:57:37.491517 master-0 kubenswrapper[8244]: I0318 09:57:37.491053 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdfn6" event={"ID":"b9c87410-8689-4884-b5a8-df3ecbb7f1a4","Type":"ContainerDied","Data":"e449b47779a9d7dba0806705cf39954c432c7970c3371ed0b172d5bc7722060d"} Mar 18 09:57:37.492992 master-0 kubenswrapper[8244]: I0318 09:57:37.492429 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8w5rc" event={"ID":"1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7","Type":"ContainerStarted","Data":"21100e562902d6efca61425bd34ddb104507d8d781f4e3a980d72c66d6282ba6"} Mar 18 09:57:37.496284 master-0 kubenswrapper[8244]: I0318 
09:57:37.496238 8244 generic.go:334] "Generic (PLEG): container finished" podID="db376fea-5756-4bc2-9685-f32730b5a6f7" containerID="8a4454e2a9f9cbf1f5dc18fe41a00327026fa7988233c2ea2c84ec074c1b0faf" exitCode=0 Mar 18 09:57:37.496284 master-0 kubenswrapper[8244]: I0318 09:57:37.496291 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nzqck" event={"ID":"db376fea-5756-4bc2-9685-f32730b5a6f7","Type":"ContainerDied","Data":"8a4454e2a9f9cbf1f5dc18fe41a00327026fa7988233c2ea2c84ec074c1b0faf"} Mar 18 09:57:38.507082 master-0 kubenswrapper[8244]: I0318 09:57:38.506966 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jl7c8" event={"ID":"0945a421-d7c4-46df-b3d9-507443627d51","Type":"ContainerStarted","Data":"1eff62cc27e434fd50cb63f04471e39fb7819f214071bd5d5eb17564061f1baa"} Mar 18 09:57:38.512023 master-0 kubenswrapper[8244]: I0318 09:57:38.511938 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdfn6" event={"ID":"b9c87410-8689-4884-b5a8-df3ecbb7f1a4","Type":"ContainerStarted","Data":"6f4027ac65186ca2cdba4e617d7733d67ff023877c3b2863f86bcce040830d49"} Mar 18 09:57:38.518632 master-0 kubenswrapper[8244]: I0318 09:57:38.518570 8244 generic.go:334] "Generic (PLEG): container finished" podID="1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7" containerID="21100e562902d6efca61425bd34ddb104507d8d781f4e3a980d72c66d6282ba6" exitCode=0 Mar 18 09:57:38.518726 master-0 kubenswrapper[8244]: I0318 09:57:38.518637 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8w5rc" event={"ID":"1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7","Type":"ContainerDied","Data":"21100e562902d6efca61425bd34ddb104507d8d781f4e3a980d72c66d6282ba6"} Mar 18 09:57:39.525401 master-0 kubenswrapper[8244]: I0318 09:57:39.525346 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nzqck" 
event={"ID":"db376fea-5756-4bc2-9685-f32730b5a6f7","Type":"ContainerStarted","Data":"19f428b3489c3e490d0b4e3b80b307299d4634a21b7c7092972388ea1d1fc574"} Mar 18 09:57:39.527701 master-0 kubenswrapper[8244]: I0318 09:57:39.527667 8244 generic.go:334] "Generic (PLEG): container finished" podID="0945a421-d7c4-46df-b3d9-507443627d51" containerID="1eff62cc27e434fd50cb63f04471e39fb7819f214071bd5d5eb17564061f1baa" exitCode=0 Mar 18 09:57:39.527791 master-0 kubenswrapper[8244]: I0318 09:57:39.527766 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jl7c8" event={"ID":"0945a421-d7c4-46df-b3d9-507443627d51","Type":"ContainerDied","Data":"1eff62cc27e434fd50cb63f04471e39fb7819f214071bd5d5eb17564061f1baa"} Mar 18 09:57:39.531156 master-0 kubenswrapper[8244]: I0318 09:57:39.531112 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8w5rc" event={"ID":"1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7","Type":"ContainerStarted","Data":"43e3d97e40e43fbea33f9f0cf89041ec9a9648cb2e97f9007f14ba4283b62e7d"} Mar 18 09:57:39.557039 master-0 kubenswrapper[8244]: I0318 09:57:39.556956 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nzqck" podStartSLOduration=3.632406957 podStartE2EDuration="6.556939792s" podCreationTimestamp="2026-03-18 09:57:33 +0000 UTC" firstStartedPulling="2026-03-18 09:57:35.459132144 +0000 UTC m=+171.938868272" lastFinishedPulling="2026-03-18 09:57:38.383664929 +0000 UTC m=+174.863401107" observedRunningTime="2026-03-18 09:57:39.554623625 +0000 UTC m=+176.034359753" watchObservedRunningTime="2026-03-18 09:57:39.556939792 +0000 UTC m=+176.036675920" Mar 18 09:57:39.600227 master-0 kubenswrapper[8244]: I0318 09:57:39.600167 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pdfn6" podStartSLOduration=4.755840432 podStartE2EDuration="7.600151587s" 
podCreationTimestamp="2026-03-18 09:57:32 +0000 UTC" firstStartedPulling="2026-03-18 09:57:35.463923101 +0000 UTC m=+171.943659229" lastFinishedPulling="2026-03-18 09:57:38.308234256 +0000 UTC m=+174.787970384" observedRunningTime="2026-03-18 09:57:39.598546988 +0000 UTC m=+176.078283116" watchObservedRunningTime="2026-03-18 09:57:39.600151587 +0000 UTC m=+176.079887715" Mar 18 09:57:39.620473 master-0 kubenswrapper[8244]: I0318 09:57:39.620417 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8w5rc" podStartSLOduration=5.154680886 podStartE2EDuration="7.620403112s" podCreationTimestamp="2026-03-18 09:57:32 +0000 UTC" firstStartedPulling="2026-03-18 09:57:36.482383352 +0000 UTC m=+172.962119500" lastFinishedPulling="2026-03-18 09:57:38.948105598 +0000 UTC m=+175.427841726" observedRunningTime="2026-03-18 09:57:39.619762107 +0000 UTC m=+176.099498225" watchObservedRunningTime="2026-03-18 09:57:39.620403112 +0000 UTC m=+176.100139240" Mar 18 09:57:40.538784 master-0 kubenswrapper[8244]: I0318 09:57:40.538721 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jl7c8" event={"ID":"0945a421-d7c4-46df-b3d9-507443627d51","Type":"ContainerStarted","Data":"d3364fc8d154b6ec01f276ba9c6da6cbdbb3e5ad2f355ec6a48c50edf2c9bde2"} Mar 18 09:57:40.559078 master-0 kubenswrapper[8244]: I0318 09:57:40.559010 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jl7c8" podStartSLOduration=5.120148712 podStartE2EDuration="8.558991292s" podCreationTimestamp="2026-03-18 09:57:32 +0000 UTC" firstStartedPulling="2026-03-18 09:57:36.472008828 +0000 UTC m=+172.951744966" lastFinishedPulling="2026-03-18 09:57:39.910851408 +0000 UTC m=+176.390587546" observedRunningTime="2026-03-18 09:57:40.556028969 +0000 UTC m=+177.035765097" watchObservedRunningTime="2026-03-18 09:57:40.558991292 +0000 UTC m=+177.038727420" Mar 18 
09:57:41.736943 master-0 kubenswrapper[8244]: I0318 09:57:41.736803 8244 scope.go:117] "RemoveContainer" containerID="ece038fe79c27be1029079683dfa33a1fa90e9515d0fac47aae2ee51f3ffd2df" Mar 18 09:57:41.739093 master-0 kubenswrapper[8244]: I0318 09:57:41.738593 8244 scope.go:117] "RemoveContainer" containerID="2795ecc70fe66ee4a0f920912ba6641b4460a6d001aedb4e015ff801933a203d" Mar 18 09:57:42.549710 master-0 kubenswrapper[8244]: I0318 09:57:42.549664 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-vj8tt_3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6/kube-scheduler-operator-container/1.log" Mar 18 09:57:42.549978 master-0 kubenswrapper[8244]: I0318 09:57:42.549724 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt" event={"ID":"3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6","Type":"ContainerStarted","Data":"642fd807a249600061935e2bfe571679562d4d8cdf9ba7e3c80d0e780f80247e"} Mar 18 09:57:42.551891 master-0 kubenswrapper[8244]: I0318 09:57:42.551796 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-g25jq_3646e0cd-49c9-4a98-a2e3-efe9359cc6c4/openshift-controller-manager-operator/1.log" Mar 18 09:57:42.551891 master-0 kubenswrapper[8244]: I0318 09:57:42.551860 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq" event={"ID":"3646e0cd-49c9-4a98-a2e3-efe9359cc6c4","Type":"ContainerStarted","Data":"69f2cdbc33296c63e514edbad7b73c69b46a3bfd3f3df3701dfc360a76760a09"} Mar 18 09:57:42.733214 master-0 kubenswrapper[8244]: I0318 09:57:42.733140 8244 scope.go:117] "RemoveContainer" containerID="100b826fb47409f3adda82931968130591dc6b1e7420f5ccfd2ef57c6281504c" Mar 18 09:57:42.733482 master-0 kubenswrapper[8244]: I0318 
09:57:42.733266 8244 scope.go:117] "RemoveContainer" containerID="ff998e161f24e27e62ffb41d5f1af2c4149f9709b9260bb197fe3f8937665152" Mar 18 09:57:43.563434 master-0 kubenswrapper[8244]: I0318 09:57:43.563392 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-4tlnm_a078565a-6970-4f42-84f4-938f1d637245/etcd-operator/1.log" Mar 18 09:57:43.563999 master-0 kubenswrapper[8244]: I0318 09:57:43.563470 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" event={"ID":"a078565a-6970-4f42-84f4-938f1d637245","Type":"ContainerStarted","Data":"53e820dc65799d326622907d56bfabcb65416af56a015afddd831825233f23fe"} Mar 18 09:57:43.565349 master-0 kubenswrapper[8244]: I0318 09:57:43.565327 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-6bb5bfb6fd-lk698_ec53d7fa-445b-4e1d-84ef-545f08e80ccc/kube-storage-version-migrator-operator/1.log" Mar 18 09:57:43.565396 master-0 kubenswrapper[8244]: I0318 09:57:43.565370 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698" event={"ID":"ec53d7fa-445b-4e1d-84ef-545f08e80ccc","Type":"ContainerStarted","Data":"ab9a533206bf10cbc0086475add5139b53093ab44226d73893369fd1ba1ed0a0"} Mar 18 09:57:43.743084 master-0 kubenswrapper[8244]: I0318 09:57:43.743025 8244 scope.go:117] "RemoveContainer" containerID="76f59e21155c1d71669d55451f86d8b5a3fe790b476c844c6bc57c22a2e68f76" Mar 18 09:57:43.743879 master-0 kubenswrapper[8244]: I0318 09:57:43.743427 8244 scope.go:117] "RemoveContainer" containerID="b5bf205c4d2d39a65c5f434aca2db07e6f6c44b756c420c12726c015f7a4b2e6" Mar 18 09:57:43.743879 master-0 kubenswrapper[8244]: I0318 09:57:43.743553 8244 scope.go:117] "RemoveContainer" 
containerID="d7fed381f588321bf949c1ee4979e243946541c605dea6e2da6f26ae56dbca2b" Mar 18 09:57:43.743879 master-0 kubenswrapper[8244]: I0318 09:57:43.743607 8244 scope.go:117] "RemoveContainer" containerID="7899027579e9cd9f7fcc12484390d733833facf13d02a5193e75c23ee942e285" Mar 18 09:57:44.132453 master-0 kubenswrapper[8244]: I0318 09:57:44.132315 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:57:44.132453 master-0 kubenswrapper[8244]: I0318 09:57:44.132400 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:57:44.132453 master-0 kubenswrapper[8244]: I0318 09:57:44.132417 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:57:44.132453 master-0 kubenswrapper[8244]: I0318 09:57:44.132426 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:57:44.140363 master-0 kubenswrapper[8244]: I0318 09:57:44.140309 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:57:44.149176 master-0 kubenswrapper[8244]: I0318 09:57:44.149106 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:57:44.573835 master-0 kubenswrapper[8244]: I0318 09:57:44.573771 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-4q9tr_f076eaf0-b041-4db0-ba06-3d85e23bb654/authentication-operator/1.log" Mar 18 09:57:44.574485 master-0 kubenswrapper[8244]: I0318 09:57:44.573852 8244 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr" event={"ID":"f076eaf0-b041-4db0-ba06-3d85e23bb654","Type":"ContainerStarted","Data":"b5df01736cfc47aa85b36fd7020d93ab1a10c4989f7408f5d6725b96384201c0"} Mar 18 09:57:44.575705 master-0 kubenswrapper[8244]: I0318 09:57:44.575668 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-b865698dc-pgtbr_bb35841e-d992-4044-aaaa-06c9faf47bd0/service-ca-operator/1.log" Mar 18 09:57:44.575808 master-0 kubenswrapper[8244]: I0318 09:57:44.575734 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr" event={"ID":"bb35841e-d992-4044-aaaa-06c9faf47bd0","Type":"ContainerStarted","Data":"d49c249df3f862614187a3b820449471cb0684b53fb2bc542b281bed1f3be2fd"} Mar 18 09:57:44.578117 master-0 kubenswrapper[8244]: I0318 09:57:44.578088 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-8srnz_9ccdc221-4ec5-487e-8ec4-85284ed628d8/network-operator/1.log" Mar 18 09:57:44.578251 master-0 kubenswrapper[8244]: I0318 09:57:44.578164 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-8srnz" event={"ID":"9ccdc221-4ec5-487e-8ec4-85284ed628d8","Type":"ContainerStarted","Data":"d104795039a77eee9eb4fddfb0911cce88afaee884dd9159c6ea0d77b9f36476"} Mar 18 09:57:44.579906 master-0 kubenswrapper[8244]: I0318 09:57:44.579860 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-d65958b8-zz68c_0d72e695-0183-4ee8-8add-5425e67f7138/openshift-apiserver-operator/1.log" Mar 18 09:57:44.580794 master-0 kubenswrapper[8244]: I0318 09:57:44.580747 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c" 
event={"ID":"0d72e695-0183-4ee8-8add-5425e67f7138","Type":"ContainerStarted","Data":"7d6fd2e1bc4be1b2a613ed03b0fa77f5671b8e216ea0aab842b063aa213fff8f"} Mar 18 09:57:44.586278 master-0 kubenswrapper[8244]: I0318 09:57:44.586229 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:57:44.587518 master-0 kubenswrapper[8244]: I0318 09:57:44.587461 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:57:44.653580 master-0 kubenswrapper[8244]: I0318 09:57:44.653538 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nzqck" Mar 18 09:57:44.654305 master-0 kubenswrapper[8244]: I0318 09:57:44.654281 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nzqck" Mar 18 09:57:44.679078 master-0 kubenswrapper[8244]: I0318 09:57:44.679009 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8w5rc" Mar 18 09:57:44.679330 master-0 kubenswrapper[8244]: I0318 09:57:44.679288 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8w5rc" Mar 18 09:57:44.687915 master-0 kubenswrapper[8244]: I0318 09:57:44.687862 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jl7c8" Mar 18 09:57:44.688030 master-0 kubenswrapper[8244]: I0318 09:57:44.687975 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jl7c8" Mar 18 09:57:44.693158 master-0 kubenswrapper[8244]: I0318 09:57:44.693010 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nzqck" 
Mar 18 09:57:44.709285 master-0 kubenswrapper[8244]: I0318 09:57:44.709204 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pdfn6" Mar 18 09:57:44.709285 master-0 kubenswrapper[8244]: I0318 09:57:44.709276 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pdfn6" Mar 18 09:57:44.719324 master-0 kubenswrapper[8244]: I0318 09:57:44.717202 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8w5rc" Mar 18 09:57:44.733217 master-0 kubenswrapper[8244]: I0318 09:57:44.733193 8244 scope.go:117] "RemoveContainer" containerID="bd5fe04a9ede0b84f18ed45bdc7555eb6593622c877cdf75babe4d3ead617eed" Mar 18 09:57:44.767942 master-0 kubenswrapper[8244]: I0318 09:57:44.767893 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pdfn6" Mar 18 09:57:45.587778 master-0 kubenswrapper[8244]: I0318 09:57:45.587735 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-pzqqc_0999f781-3299-4cb6-ba76-2a4f4584c685/kube-controller-manager-operator/1.log" Mar 18 09:57:45.588575 master-0 kubenswrapper[8244]: I0318 09:57:45.588532 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc" event={"ID":"0999f781-3299-4cb6-ba76-2a4f4584c685","Type":"ContainerStarted","Data":"bdf23e456932d75fae6cdcf4a2bdaca513da90b17853bb40022bebbd243e87d8"} Mar 18 09:57:45.629890 master-0 kubenswrapper[8244]: I0318 09:57:45.628901 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pdfn6" Mar 18 09:57:45.633857 master-0 kubenswrapper[8244]: I0318 09:57:45.631626 8244 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nzqck" Mar 18 09:57:45.640333 master-0 kubenswrapper[8244]: I0318 09:57:45.640240 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8w5rc" Mar 18 09:57:45.735926 master-0 kubenswrapper[8244]: I0318 09:57:45.735876 8244 scope.go:117] "RemoveContainer" containerID="81cd35f002f1f429688cbe007f6618850051907823664181496568b308ab47bb" Mar 18 09:57:45.739264 master-0 kubenswrapper[8244]: I0318 09:57:45.739174 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jl7c8" podUID="0945a421-d7c4-46df-b3d9-507443627d51" containerName="registry-server" probeResult="failure" output=< Mar 18 09:57:45.739264 master-0 kubenswrapper[8244]: timeout: failed to connect service ":50051" within 1s Mar 18 09:57:45.739264 master-0 kubenswrapper[8244]: > Mar 18 09:57:46.602572 master-0 kubenswrapper[8244]: I0318 09:57:46.602513 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-smghb_6a6a616d-012a-479e-ab3d-b21295ea1805/kube-apiserver-operator/1.log" Mar 18 09:57:46.603120 master-0 kubenswrapper[8244]: I0318 09:57:46.602651 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb" event={"ID":"6a6a616d-012a-479e-ab3d-b21295ea1805","Type":"ContainerStarted","Data":"1438e5c0b41d2a2cdef9ebed19bce07d60cb299edfd66da1254cb9b0f6f74353"} Mar 18 09:57:51.983733 master-0 kubenswrapper[8244]: I0318 09:57:51.979886 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-5dg2r"] Mar 18 09:57:51.983733 master-0 kubenswrapper[8244]: E0318 09:57:51.980114 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4d7edd6-7975-468e-adea-138d92ed1be1" 
containerName="installer" Mar 18 09:57:51.983733 master-0 kubenswrapper[8244]: I0318 09:57:51.980129 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4d7edd6-7975-468e-adea-138d92ed1be1" containerName="installer" Mar 18 09:57:51.983733 master-0 kubenswrapper[8244]: I0318 09:57:51.980208 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4d7edd6-7975-468e-adea-138d92ed1be1" containerName="installer" Mar 18 09:57:51.983733 master-0 kubenswrapper[8244]: I0318 09:57:51.980713 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-5dg2r" Mar 18 09:57:51.983733 master-0 kubenswrapper[8244]: I0318 09:57:51.983447 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-qcchq" Mar 18 09:57:51.983733 master-0 kubenswrapper[8244]: I0318 09:57:51.983741 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 18 09:57:51.984526 master-0 kubenswrapper[8244]: I0318 09:57:51.983907 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 18 09:57:51.984526 master-0 kubenswrapper[8244]: I0318 09:57:51.984096 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 18 09:57:51.993852 master-0 kubenswrapper[8244]: I0318 09:57:51.988834 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6cb57bb5db-wk759"] Mar 18 09:57:51.993852 master-0 kubenswrapper[8244]: I0318 09:57:51.989796 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-wk759" Mar 18 09:57:51.993852 master-0 kubenswrapper[8244]: I0318 09:57:51.993388 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-hvm64" Mar 18 09:57:51.994444 master-0 kubenswrapper[8244]: I0318 09:57:51.993876 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 18 09:57:51.994444 master-0 kubenswrapper[8244]: I0318 09:57:51.994202 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 18 09:57:51.994444 master-0 kubenswrapper[8244]: I0318 09:57:51.994404 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 18 09:57:51.994580 master-0 kubenswrapper[8244]: I0318 09:57:51.994544 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 18 09:57:52.000357 master-0 kubenswrapper[8244]: I0318 09:57:52.000307 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 18 09:57:52.015273 master-0 kubenswrapper[8244]: I0318 09:57:52.012893 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-tntvw"] Mar 18 09:57:52.015273 master-0 kubenswrapper[8244]: I0318 09:57:52.014082 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-tntvw" Mar 18 09:57:52.018116 master-0 kubenswrapper[8244]: I0318 09:57:52.017017 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-fr2b8" Mar 18 09:57:52.018116 master-0 kubenswrapper[8244]: I0318 09:57:52.017394 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 18 09:57:52.030853 master-0 kubenswrapper[8244]: I0318 09:57:52.020845 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-rtnkl"] Mar 18 09:57:52.030853 master-0 kubenswrapper[8244]: I0318 09:57:52.021204 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 18 09:57:52.030853 master-0 kubenswrapper[8244]: I0318 09:57:52.021573 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 18 09:57:52.030853 master-0 kubenswrapper[8244]: I0318 09:57:52.021899 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 18 09:57:52.030853 master-0 kubenswrapper[8244]: I0318 09:57:52.022103 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 18 09:57:52.030853 master-0 kubenswrapper[8244]: I0318 09:57:52.029608 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-5dg2r"] Mar 18 09:57:52.030853 master-0 kubenswrapper[8244]: I0318 09:57:52.029750 8244 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-rtnkl"
Mar 18 09:57:52.043855 master-0 kubenswrapper[8244]: I0318 09:57:52.038435 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-vsqqr"
Mar 18 09:57:52.043855 master-0 kubenswrapper[8244]: I0318 09:57:52.038549 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Mar 18 09:57:52.043855 master-0 kubenswrapper[8244]: I0318 09:57:52.038711 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Mar 18 09:57:52.043855 master-0 kubenswrapper[8244]: I0318 09:57:52.038799 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Mar 18 09:57:52.043855 master-0 kubenswrapper[8244]: I0318 09:57:52.038796 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Mar 18 09:57:52.043855 master-0 kubenswrapper[8244]: I0318 09:57:52.040624 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-rtnkl"]
Mar 18 09:57:52.102975 master-0 kubenswrapper[8244]: I0318 09:57:52.088581 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdjwg\" (UniqueName: \"kubernetes.io/projected/22c13008-d600-417e-9df1-96f3f579a11f-kube-api-access-sdjwg\") pod \"machine-approver-6cb57bb5db-wk759\" (UID: \"22c13008-d600-417e-9df1-96f3f579a11f\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-wk759"
Mar 18 09:57:52.102975 master-0 kubenswrapper[8244]: I0318 09:57:52.088640 8244 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ce65f61f-8e3a-47d5-ac12-ad4ab05d2850-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-5dg2r\" (UID: \"ce65f61f-8e3a-47d5-ac12-ad4ab05d2850\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-5dg2r"
Mar 18 09:57:52.102975 master-0 kubenswrapper[8244]: I0318 09:57:52.088666 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c13008-d600-417e-9df1-96f3f579a11f-auth-proxy-config\") pod \"machine-approver-6cb57bb5db-wk759\" (UID: \"22c13008-d600-417e-9df1-96f3f579a11f\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-wk759"
Mar 18 09:57:52.102975 master-0 kubenswrapper[8244]: I0318 09:57:52.088707 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmnjp\" (UniqueName: \"kubernetes.io/projected/ce65f61f-8e3a-47d5-ac12-ad4ab05d2850-kube-api-access-jmnjp\") pod \"cluster-samples-operator-85f7577d78-5dg2r\" (UID: \"ce65f61f-8e3a-47d5-ac12-ad4ab05d2850\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-5dg2r"
Mar 18 09:57:52.102975 master-0 kubenswrapper[8244]: I0318 09:57:52.088920 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9eb27ff-f89f-4c0e-abac-9fdfd8cee887-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-tntvw\" (UID: \"e9eb27ff-f89f-4c0e-abac-9fdfd8cee887\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-tntvw"
Mar 18 09:57:52.102975 master-0 kubenswrapper[8244]: I0318 09:57:52.089140 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for
volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9eb27ff-f89f-4c0e-abac-9fdfd8cee887-images\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-tntvw\" (UID: \"e9eb27ff-f89f-4c0e-abac-9fdfd8cee887\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-tntvw"
Mar 18 09:57:52.102975 master-0 kubenswrapper[8244]: I0318 09:57:52.089413 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/caec44dc-aab7-4407-b34a-52bbe4b4f635-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-rtnkl\" (UID: \"caec44dc-aab7-4407-b34a-52bbe4b4f635\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-rtnkl"
Mar 18 09:57:52.102975 master-0 kubenswrapper[8244]: I0318 09:57:52.089577 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcsbr\" (UniqueName: \"kubernetes.io/projected/e9eb27ff-f89f-4c0e-abac-9fdfd8cee887-kube-api-access-kcsbr\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-tntvw\" (UID: \"e9eb27ff-f89f-4c0e-abac-9fdfd8cee887\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-tntvw"
Mar 18 09:57:52.102975 master-0 kubenswrapper[8244]: I0318 09:57:52.089759 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/caec44dc-aab7-4407-b34a-52bbe4b4f635-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-rtnkl\" (UID: \"caec44dc-aab7-4407-b34a-52bbe4b4f635\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-rtnkl"
Mar 18 09:57:52.102975 master-0 kubenswrapper[8244]: I0318 09:57:52.089949 8244 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c13008-d600-417e-9df1-96f3f579a11f-config\") pod \"machine-approver-6cb57bb5db-wk759\" (UID: \"22c13008-d600-417e-9df1-96f3f579a11f\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-wk759"
Mar 18 09:57:52.102975 master-0 kubenswrapper[8244]: I0318 09:57:52.090125 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9eb27ff-f89f-4c0e-abac-9fdfd8cee887-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-tntvw\" (UID: \"e9eb27ff-f89f-4c0e-abac-9fdfd8cee887\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-tntvw"
Mar 18 09:57:52.102975 master-0 kubenswrapper[8244]: I0318 09:57:52.092544 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c13008-d600-417e-9df1-96f3f579a11f-machine-approver-tls\") pod \"machine-approver-6cb57bb5db-wk759\" (UID: \"22c13008-d600-417e-9df1-96f3f579a11f\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-wk759"
Mar 18 09:57:52.102975 master-0 kubenswrapper[8244]: I0318 09:57:52.092691 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xml27\" (UniqueName: \"kubernetes.io/projected/caec44dc-aab7-4407-b34a-52bbe4b4f635-kube-api-access-xml27\") pod \"cloud-credential-operator-744f9dbf77-rtnkl\" (UID: \"caec44dc-aab7-4407-b34a-52bbe4b4f635\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-rtnkl"
Mar 18 09:57:52.102975 master-0 kubenswrapper[8244]: I0318 09:57:52.092738 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume
started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e9eb27ff-f89f-4c0e-abac-9fdfd8cee887-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-tntvw\" (UID: \"e9eb27ff-f89f-4c0e-abac-9fdfd8cee887\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-tntvw"
Mar 18 09:57:52.117597 master-0 kubenswrapper[8244]: I0318 09:57:52.116594 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-4kr54"]
Mar 18 09:57:52.119099 master-0 kubenswrapper[8244]: I0318 09:57:52.118408 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-4kr54"
Mar 18 09:57:52.122656 master-0 kubenswrapper[8244]: I0318 09:57:52.122639 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-t5rvh"
Mar 18 09:57:52.123731 master-0 kubenswrapper[8244]: I0318 09:57:52.123659 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Mar 18 09:57:52.167858 master-0 kubenswrapper[8244]: I0318 09:57:52.166421 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-4kr54"]
Mar 18 09:57:52.188522 master-0 kubenswrapper[8244]: I0318 09:57:52.188285 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zcm5j"]
Mar 18 09:57:52.189150 master-0 kubenswrapper[8244]: I0318 09:57:52.189119 8244 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zcm5j"
Mar 18 09:57:52.191382 master-0 kubenswrapper[8244]: I0318 09:57:52.191348 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-68bf6ff9d6-bdcw7"]
Mar 18 09:57:52.192920 master-0 kubenswrapper[8244]: I0318 09:57:52.192891 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-68bf6ff9d6-bdcw7"
Mar 18 09:57:52.194112 master-0 kubenswrapper[8244]: I0318 09:57:52.194087 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-wv86q"
Mar 18 09:57:52.194440 master-0 kubenswrapper[8244]: I0318 09:57:52.194141 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 18 09:57:52.194544 master-0 kubenswrapper[8244]: I0318 09:57:52.194201 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 18 09:57:52.194607 master-0 kubenswrapper[8244]: I0318 09:57:52.194248 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-p9m8v"
Mar 18 09:57:52.194715 master-0 kubenswrapper[8244]: I0318 09:57:52.194265 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 18 09:57:52.196792 master-0 kubenswrapper[8244]: I0318 09:57:52.196767 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Mar 18 09:57:52.197800 master-0 kubenswrapper[8244]: I0318 09:57:52.197778 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Mar 18 09:57:52.197892 master-0 kubenswrapper[8244]: I0318 09:57:52.197798 8244 reflector.go:368] Caches populated
for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Mar 18 09:57:52.198009 master-0 kubenswrapper[8244]: I0318 09:57:52.197991 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Mar 18 09:57:52.198120 master-0 kubenswrapper[8244]: I0318 09:57:52.198103 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Mar 18 09:57:52.198910 master-0 kubenswrapper[8244]: I0318 09:57:52.198890 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcsbr\" (UniqueName: \"kubernetes.io/projected/e9eb27ff-f89f-4c0e-abac-9fdfd8cee887-kube-api-access-kcsbr\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-tntvw\" (UID: \"e9eb27ff-f89f-4c0e-abac-9fdfd8cee887\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-tntvw"
Mar 18 09:57:52.199200 master-0 kubenswrapper[8244]: I0318 09:57:52.199186 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/caec44dc-aab7-4407-b34a-52bbe4b4f635-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-rtnkl\" (UID: \"caec44dc-aab7-4407-b34a-52bbe4b4f635\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-rtnkl"
Mar 18 09:57:52.200004 master-0 kubenswrapper[8244]: I0318 09:57:52.199968 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c13008-d600-417e-9df1-96f3f579a11f-config\") pod \"machine-approver-6cb57bb5db-wk759\" (UID: \"22c13008-d600-417e-9df1-96f3f579a11f\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-wk759"
Mar 18 09:57:52.200064 master-0 kubenswrapper[8244]: I0318 09:57:52.200046 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9eb27ff-f89f-4c0e-abac-9fdfd8cee887-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-tntvw\" (UID: \"e9eb27ff-f89f-4c0e-abac-9fdfd8cee887\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-tntvw"
Mar 18 09:57:52.200104 master-0 kubenswrapper[8244]: I0318 09:57:52.200090 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c13008-d600-417e-9df1-96f3f579a11f-machine-approver-tls\") pod \"machine-approver-6cb57bb5db-wk759\" (UID: \"22c13008-d600-417e-9df1-96f3f579a11f\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-wk759"
Mar 18 09:57:52.200145 master-0 kubenswrapper[8244]: I0318 09:57:52.200118 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xml27\" (UniqueName: \"kubernetes.io/projected/caec44dc-aab7-4407-b34a-52bbe4b4f635-kube-api-access-xml27\") pod \"cloud-credential-operator-744f9dbf77-rtnkl\" (UID: \"caec44dc-aab7-4407-b34a-52bbe4b4f635\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-rtnkl"
Mar 18 09:57:52.200179 master-0 kubenswrapper[8244]: I0318 09:57:52.200171 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/29490aed-9c97-42d1-94c8-44d1de13b70c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-4kr54\" (UID: \"29490aed-9c97-42d1-94c8-44d1de13b70c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-4kr54"
Mar 18 09:57:52.200212 master-0 kubenswrapper[8244]: I0318 09:57:52.200201 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName:
\"kubernetes.io/configmap/e9eb27ff-f89f-4c0e-abac-9fdfd8cee887-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-tntvw\" (UID: \"e9eb27ff-f89f-4c0e-abac-9fdfd8cee887\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-tntvw"
Mar 18 09:57:52.200257 master-0 kubenswrapper[8244]: I0318 09:57:52.200239 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdjwg\" (UniqueName: \"kubernetes.io/projected/22c13008-d600-417e-9df1-96f3f579a11f-kube-api-access-sdjwg\") pod \"machine-approver-6cb57bb5db-wk759\" (UID: \"22c13008-d600-417e-9df1-96f3f579a11f\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-wk759"
Mar 18 09:57:52.200299 master-0 kubenswrapper[8244]: I0318 09:57:52.200281 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ce65f61f-8e3a-47d5-ac12-ad4ab05d2850-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-5dg2r\" (UID: \"ce65f61f-8e3a-47d5-ac12-ad4ab05d2850\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-5dg2r"
Mar 18 09:57:52.200333 master-0 kubenswrapper[8244]: I0318 09:57:52.200322 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c13008-d600-417e-9df1-96f3f579a11f-auth-proxy-config\") pod \"machine-approver-6cb57bb5db-wk759\" (UID: \"22c13008-d600-417e-9df1-96f3f579a11f\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-wk759"
Mar 18 09:57:52.200379 master-0 kubenswrapper[8244]: I0318 09:57:52.200358 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmnjp\" (UniqueName: \"kubernetes.io/projected/ce65f61f-8e3a-47d5-ac12-ad4ab05d2850-kube-api-access-jmnjp\") pod \"cluster-samples-operator-85f7577d78-5dg2r\"
(UID: \"ce65f61f-8e3a-47d5-ac12-ad4ab05d2850\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-5dg2r"
Mar 18 09:57:52.200420 master-0 kubenswrapper[8244]: I0318 09:57:52.200402 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9eb27ff-f89f-4c0e-abac-9fdfd8cee887-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-tntvw\" (UID: \"e9eb27ff-f89f-4c0e-abac-9fdfd8cee887\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-tntvw"
Mar 18 09:57:52.200456 master-0 kubenswrapper[8244]: I0318 09:57:52.200430 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-257hk\" (UniqueName: \"kubernetes.io/projected/29490aed-9c97-42d1-94c8-44d1de13b70c-kube-api-access-257hk\") pod \"cluster-storage-operator-7d87854d6-4kr54\" (UID: \"29490aed-9c97-42d1-94c8-44d1de13b70c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-4kr54"
Mar 18 09:57:52.200486 master-0 kubenswrapper[8244]: I0318 09:57:52.200462 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9eb27ff-f89f-4c0e-abac-9fdfd8cee887-images\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-tntvw\" (UID: \"e9eb27ff-f89f-4c0e-abac-9fdfd8cee887\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-tntvw"
Mar 18 09:57:52.200520 master-0 kubenswrapper[8244]: I0318 09:57:52.200489 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/caec44dc-aab7-4407-b34a-52bbe4b4f635-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-rtnkl\" (UID:
\"caec44dc-aab7-4407-b34a-52bbe4b4f635\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-rtnkl"
Mar 18 09:57:52.200655 master-0 kubenswrapper[8244]: I0318 09:57:52.200625 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c13008-d600-417e-9df1-96f3f579a11f-config\") pod \"machine-approver-6cb57bb5db-wk759\" (UID: \"22c13008-d600-417e-9df1-96f3f579a11f\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-wk759"
Mar 18 09:57:52.200884 master-0 kubenswrapper[8244]: I0318 09:57:52.200868 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/caec44dc-aab7-4407-b34a-52bbe4b4f635-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-rtnkl\" (UID: \"caec44dc-aab7-4407-b34a-52bbe4b4f635\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-rtnkl"
Mar 18 09:57:52.200975 master-0 kubenswrapper[8244]: I0318 09:57:52.200950 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9eb27ff-f89f-4c0e-abac-9fdfd8cee887-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-tntvw\" (UID: \"e9eb27ff-f89f-4c0e-abac-9fdfd8cee887\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-tntvw"
Mar 18 09:57:52.201590 master-0 kubenswrapper[8244]: I0318 09:57:52.201558 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c13008-d600-417e-9df1-96f3f579a11f-auth-proxy-config\") pod \"machine-approver-6cb57bb5db-wk759\" (UID: \"22c13008-d600-417e-9df1-96f3f579a11f\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-wk759"
Mar 18 09:57:52.201642 master-0 kubenswrapper[8244]: I0318 09:57:52.201596 8244
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9eb27ff-f89f-4c0e-abac-9fdfd8cee887-images\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-tntvw\" (UID: \"e9eb27ff-f89f-4c0e-abac-9fdfd8cee887\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-tntvw"
Mar 18 09:57:52.202166 master-0 kubenswrapper[8244]: I0318 09:57:52.202130 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e9eb27ff-f89f-4c0e-abac-9fdfd8cee887-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-tntvw\" (UID: \"e9eb27ff-f89f-4c0e-abac-9fdfd8cee887\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-tntvw"
Mar 18 09:57:52.202656 master-0 kubenswrapper[8244]: I0318 09:57:52.202628 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9eb27ff-f89f-4c0e-abac-9fdfd8cee887-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-tntvw\" (UID: \"e9eb27ff-f89f-4c0e-abac-9fdfd8cee887\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-tntvw"
Mar 18 09:57:52.203004 master-0 kubenswrapper[8244]: I0318 09:57:52.202979 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ce65f61f-8e3a-47d5-ac12-ad4ab05d2850-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-5dg2r\" (UID: \"ce65f61f-8e3a-47d5-ac12-ad4ab05d2850\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-5dg2r"
Mar 18 09:57:52.203284 master-0 kubenswrapper[8244]: I0318 09:57:52.203256 8244 operation_generator.go:637] "MountVolume.SetUp
succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c13008-d600-417e-9df1-96f3f579a11f-machine-approver-tls\") pod \"machine-approver-6cb57bb5db-wk759\" (UID: \"22c13008-d600-417e-9df1-96f3f579a11f\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-wk759"
Mar 18 09:57:52.211236 master-0 kubenswrapper[8244]: I0318 09:57:52.211198 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/caec44dc-aab7-4407-b34a-52bbe4b4f635-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-rtnkl\" (UID: \"caec44dc-aab7-4407-b34a-52bbe4b4f635\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-rtnkl"
Mar 18 09:57:52.217967 master-0 kubenswrapper[8244]: I0318 09:57:52.216813 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-866dc4744-mw9tt"]
Mar 18 09:57:52.219013 master-0 kubenswrapper[8244]: I0318 09:57:52.218740 8244 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-mw9tt"
Mar 18 09:57:52.223157 master-0 kubenswrapper[8244]: I0318 09:57:52.223122 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Mar 18 09:57:52.223362 master-0 kubenswrapper[8244]: I0318 09:57:52.223346 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-vxxzb"
Mar 18 09:57:52.223540 master-0 kubenswrapper[8244]: I0318 09:57:52.223515 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Mar 18 09:57:52.290853 master-0 kubenswrapper[8244]: I0318 09:57:52.284875 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-6fbb6cf6f9-xnvn9"]
Mar 18 09:57:52.290853 master-0 kubenswrapper[8244]: I0318 09:57:52.285969 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-xnvn9"
Mar 18 09:57:52.290853 master-0 kubenswrapper[8244]: I0318 09:57:52.286153 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l"]
Mar 18 09:57:52.290853 master-0 kubenswrapper[8244]: I0318 09:57:52.287158 8244 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l"
Mar 18 09:57:52.291231 master-0 kubenswrapper[8244]: I0318 09:57:52.290913 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-68bf6ff9d6-bdcw7"]
Mar 18 09:57:52.291231 master-0 kubenswrapper[8244]: I0318 09:57:52.291163 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-g2rgj"
Mar 18 09:57:52.291309 master-0 kubenswrapper[8244]: I0318 09:57:52.291249 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 18 09:57:52.297298 master-0 kubenswrapper[8244]: I0318 09:57:52.291480 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Mar 18 09:57:52.297298 master-0 kubenswrapper[8244]: I0318 09:57:52.291503 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 18 09:57:52.297298 master-0 kubenswrapper[8244]: I0318 09:57:52.291513 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 18 09:57:52.297298 master-0 kubenswrapper[8244]: I0318 09:57:52.291791 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Mar 18 09:57:52.297298 master-0 kubenswrapper[8244]: I0318 09:57:52.291977 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Mar 18 09:57:52.297298 master-0 kubenswrapper[8244]: I0318 09:57:52.292174 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Mar 18 09:57:52.297298 master-0 kubenswrapper[8244]: I0318 09:57:52.292339 8244 reflector.go:368] Caches populated
for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-8lfl6"
Mar 18 09:57:52.300201 master-0 kubenswrapper[8244]: I0318 09:57:52.298559 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zcm5j"]
Mar 18 09:57:52.309847 master-0 kubenswrapper[8244]: I0318 09:57:52.303026 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-866dc4744-mw9tt"]
Mar 18 09:57:52.309847 master-0 kubenswrapper[8244]: I0318 09:57:52.303363 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-257hk\" (UniqueName: \"kubernetes.io/projected/29490aed-9c97-42d1-94c8-44d1de13b70c-kube-api-access-257hk\") pod \"cluster-storage-operator-7d87854d6-4kr54\" (UID: \"29490aed-9c97-42d1-94c8-44d1de13b70c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-4kr54"
Mar 18 09:57:52.309847 master-0 kubenswrapper[8244]: I0318 09:57:52.303440 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/71755097-7543-48f8-8925-0e21650bf8f6-snapshots\") pod \"insights-operator-68bf6ff9d6-bdcw7\" (UID: \"71755097-7543-48f8-8925-0e21650bf8f6\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bdcw7"
Mar 18 09:57:52.309847 master-0 kubenswrapper[8244]: I0318 09:57:52.303499 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9f5c64aa-676e-4e48-b714-02f6edb1d361-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-mw9tt\" (UID: \"9f5c64aa-676e-4e48-b714-02f6edb1d361\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-mw9tt"
Mar 18 09:57:52.309847 master-0 kubenswrapper[8244]: I0318 09:57:52.303545 8244 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f88c2a18-11f5-45ef-aff1-3c5976716d85-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-zcm5j\" (UID: \"f88c2a18-11f5-45ef-aff1-3c5976716d85\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zcm5j"
Mar 18 09:57:52.309847 master-0 kubenswrapper[8244]: I0318 09:57:52.303574 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71755097-7543-48f8-8925-0e21650bf8f6-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-bdcw7\" (UID: \"71755097-7543-48f8-8925-0e21650bf8f6\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bdcw7"
Mar 18 09:57:52.309847 master-0 kubenswrapper[8244]: I0318 09:57:52.303612 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xttqt\" (UniqueName: \"kubernetes.io/projected/9f5c64aa-676e-4e48-b714-02f6edb1d361-kube-api-access-xttqt\") pod \"cluster-autoscaler-operator-866dc4744-mw9tt\" (UID: \"9f5c64aa-676e-4e48-b714-02f6edb1d361\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-mw9tt"
Mar 18 09:57:52.309847 master-0 kubenswrapper[8244]: I0318 09:57:52.303680 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/29490aed-9c97-42d1-94c8-44d1de13b70c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-4kr54\" (UID: \"29490aed-9c97-42d1-94c8-44d1de13b70c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-4kr54"
Mar 18 09:57:52.309847 master-0 kubenswrapper[8244]: I0318 09:57:52.303714 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71755097-7543-48f8-8925-0e21650bf8f6-serving-cert\") pod \"insights-operator-68bf6ff9d6-bdcw7\" (UID: \"71755097-7543-48f8-8925-0e21650bf8f6\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bdcw7" Mar 18 09:57:52.309847 master-0 kubenswrapper[8244]: I0318 09:57:52.303761 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scz6j\" (UniqueName: \"kubernetes.io/projected/f88c2a18-11f5-45ef-aff1-3c5976716d85-kube-api-access-scz6j\") pod \"control-plane-machine-set-operator-6f97756bc8-zcm5j\" (UID: \"f88c2a18-11f5-45ef-aff1-3c5976716d85\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zcm5j" Mar 18 09:57:52.309847 master-0 kubenswrapper[8244]: I0318 09:57:52.303792 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvhfc\" (UniqueName: \"kubernetes.io/projected/71755097-7543-48f8-8925-0e21650bf8f6-kube-api-access-qvhfc\") pod \"insights-operator-68bf6ff9d6-bdcw7\" (UID: \"71755097-7543-48f8-8925-0e21650bf8f6\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bdcw7" Mar 18 09:57:52.309847 master-0 kubenswrapper[8244]: I0318 09:57:52.303817 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9f5c64aa-676e-4e48-b714-02f6edb1d361-cert\") pod \"cluster-autoscaler-operator-866dc4744-mw9tt\" (UID: \"9f5c64aa-676e-4e48-b714-02f6edb1d361\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-mw9tt" Mar 18 09:57:52.309847 master-0 kubenswrapper[8244]: I0318 09:57:52.303869 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71755097-7543-48f8-8925-0e21650bf8f6-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-bdcw7\" (UID: 
\"71755097-7543-48f8-8925-0e21650bf8f6\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bdcw7" Mar 18 09:57:52.320008 master-0 kubenswrapper[8244]: I0318 09:57:52.311959 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/29490aed-9c97-42d1-94c8-44d1de13b70c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-4kr54\" (UID: \"29490aed-9c97-42d1-94c8-44d1de13b70c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-4kr54" Mar 18 09:57:52.330206 master-0 kubenswrapper[8244]: I0318 09:57:52.325357 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmnjp\" (UniqueName: \"kubernetes.io/projected/ce65f61f-8e3a-47d5-ac12-ad4ab05d2850-kube-api-access-jmnjp\") pod \"cluster-samples-operator-85f7577d78-5dg2r\" (UID: \"ce65f61f-8e3a-47d5-ac12-ad4ab05d2850\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-5dg2r" Mar 18 09:57:52.330206 master-0 kubenswrapper[8244]: I0318 09:57:52.328764 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdjwg\" (UniqueName: \"kubernetes.io/projected/22c13008-d600-417e-9df1-96f3f579a11f-kube-api-access-sdjwg\") pod \"machine-approver-6cb57bb5db-wk759\" (UID: \"22c13008-d600-417e-9df1-96f3f579a11f\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-wk759" Mar 18 09:57:52.330609 master-0 kubenswrapper[8244]: I0318 09:57:52.330560 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-6fbb6cf6f9-xnvn9"] Mar 18 09:57:52.330609 master-0 kubenswrapper[8244]: I0318 09:57:52.330611 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l"] Mar 18 09:57:52.331174 master-0 kubenswrapper[8244]: I0318 09:57:52.331140 8244 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kcsbr\" (UniqueName: \"kubernetes.io/projected/e9eb27ff-f89f-4c0e-abac-9fdfd8cee887-kube-api-access-kcsbr\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-tntvw\" (UID: \"e9eb27ff-f89f-4c0e-abac-9fdfd8cee887\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-tntvw" Mar 18 09:57:52.333573 master-0 kubenswrapper[8244]: I0318 09:57:52.333535 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xml27\" (UniqueName: \"kubernetes.io/projected/caec44dc-aab7-4407-b34a-52bbe4b4f635-kube-api-access-xml27\") pod \"cloud-credential-operator-744f9dbf77-rtnkl\" (UID: \"caec44dc-aab7-4407-b34a-52bbe4b4f635\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-rtnkl" Mar 18 09:57:52.367145 master-0 kubenswrapper[8244]: I0318 09:57:52.367079 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-tntvw" Mar 18 09:57:52.382083 master-0 kubenswrapper[8244]: W0318 09:57:52.382047 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9eb27ff_f89f_4c0e_abac_9fdfd8cee887.slice/crio-e4e54b7e57036564c391f3dbe2a0d0cddde83c0e5f2501af4faca38ba51ff057 WatchSource:0}: Error finding container e4e54b7e57036564c391f3dbe2a0d0cddde83c0e5f2501af4faca38ba51ff057: Status 404 returned error can't find the container with id e4e54b7e57036564c391f3dbe2a0d0cddde83c0e5f2501af4faca38ba51ff057 Mar 18 09:57:52.387988 master-0 kubenswrapper[8244]: I0318 09:57:52.387945 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-rtnkl" Mar 18 09:57:52.405179 master-0 kubenswrapper[8244]: I0318 09:57:52.405146 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1084562a-20a0-432d-b739-90bc0a4daff2-cert\") pod \"cluster-baremetal-operator-6f69995874-lnq7l\" (UID: \"1084562a-20a0-432d-b739-90bc0a4daff2\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l" Mar 18 09:57:52.405324 master-0 kubenswrapper[8244]: I0318 09:57:52.405187 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scz6j\" (UniqueName: \"kubernetes.io/projected/f88c2a18-11f5-45ef-aff1-3c5976716d85-kube-api-access-scz6j\") pod \"control-plane-machine-set-operator-6f97756bc8-zcm5j\" (UID: \"f88c2a18-11f5-45ef-aff1-3c5976716d85\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zcm5j" Mar 18 09:57:52.405324 master-0 kubenswrapper[8244]: I0318 09:57:52.405210 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvhfc\" (UniqueName: \"kubernetes.io/projected/71755097-7543-48f8-8925-0e21650bf8f6-kube-api-access-qvhfc\") pod \"insights-operator-68bf6ff9d6-bdcw7\" (UID: \"71755097-7543-48f8-8925-0e21650bf8f6\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bdcw7" Mar 18 09:57:52.405324 master-0 kubenswrapper[8244]: I0318 09:57:52.405229 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9f5c64aa-676e-4e48-b714-02f6edb1d361-cert\") pod \"cluster-autoscaler-operator-866dc4744-mw9tt\" (UID: \"9f5c64aa-676e-4e48-b714-02f6edb1d361\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-mw9tt" Mar 18 09:57:52.405324 master-0 kubenswrapper[8244]: I0318 09:57:52.405249 8244 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71755097-7543-48f8-8925-0e21650bf8f6-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-bdcw7\" (UID: \"71755097-7543-48f8-8925-0e21650bf8f6\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bdcw7" Mar 18 09:57:52.405582 master-0 kubenswrapper[8244]: I0318 09:57:52.405546 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/1084562a-20a0-432d-b739-90bc0a4daff2-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-lnq7l\" (UID: \"1084562a-20a0-432d-b739-90bc0a4daff2\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l" Mar 18 09:57:52.405631 master-0 kubenswrapper[8244]: I0318 09:57:52.405606 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1084562a-20a0-432d-b739-90bc0a4daff2-config\") pod \"cluster-baremetal-operator-6f69995874-lnq7l\" (UID: \"1084562a-20a0-432d-b739-90bc0a4daff2\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l" Mar 18 09:57:52.405688 master-0 kubenswrapper[8244]: I0318 09:57:52.405666 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1084562a-20a0-432d-b739-90bc0a4daff2-images\") pod \"cluster-baremetal-operator-6f69995874-lnq7l\" (UID: \"1084562a-20a0-432d-b739-90bc0a4daff2\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l" Mar 18 09:57:52.405725 master-0 kubenswrapper[8244]: I0318 09:57:52.405692 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/71755097-7543-48f8-8925-0e21650bf8f6-snapshots\") pod 
\"insights-operator-68bf6ff9d6-bdcw7\" (UID: \"71755097-7543-48f8-8925-0e21650bf8f6\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bdcw7" Mar 18 09:57:52.405755 master-0 kubenswrapper[8244]: I0318 09:57:52.405721 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmsjt\" (UniqueName: \"kubernetes.io/projected/1084562a-20a0-432d-b739-90bc0a4daff2-kube-api-access-qmsjt\") pod \"cluster-baremetal-operator-6f69995874-lnq7l\" (UID: \"1084562a-20a0-432d-b739-90bc0a4daff2\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l" Mar 18 09:57:52.405788 master-0 kubenswrapper[8244]: I0318 09:57:52.405769 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29fbc78b-1887-40d4-8165-f0f7cc40b583-config\") pod \"machine-api-operator-6fbb6cf6f9-xnvn9\" (UID: \"29fbc78b-1887-40d4-8165-f0f7cc40b583\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-xnvn9" Mar 18 09:57:52.405841 master-0 kubenswrapper[8244]: I0318 09:57:52.405791 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9f5c64aa-676e-4e48-b714-02f6edb1d361-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-mw9tt\" (UID: \"9f5c64aa-676e-4e48-b714-02f6edb1d361\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-mw9tt" Mar 18 09:57:52.405886 master-0 kubenswrapper[8244]: I0318 09:57:52.405844 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/29fbc78b-1887-40d4-8165-f0f7cc40b583-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-xnvn9\" (UID: \"29fbc78b-1887-40d4-8165-f0f7cc40b583\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-xnvn9" Mar 18 09:57:52.405886 
master-0 kubenswrapper[8244]: I0318 09:57:52.405869 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71755097-7543-48f8-8925-0e21650bf8f6-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-bdcw7\" (UID: \"71755097-7543-48f8-8925-0e21650bf8f6\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bdcw7" Mar 18 09:57:52.405947 master-0 kubenswrapper[8244]: I0318 09:57:52.405886 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f88c2a18-11f5-45ef-aff1-3c5976716d85-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-zcm5j\" (UID: \"f88c2a18-11f5-45ef-aff1-3c5976716d85\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zcm5j" Mar 18 09:57:52.405947 master-0 kubenswrapper[8244]: I0318 09:57:52.405910 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/29fbc78b-1887-40d4-8165-f0f7cc40b583-images\") pod \"machine-api-operator-6fbb6cf6f9-xnvn9\" (UID: \"29fbc78b-1887-40d4-8165-f0f7cc40b583\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-xnvn9" Mar 18 09:57:52.405947 master-0 kubenswrapper[8244]: I0318 09:57:52.405926 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xttqt\" (UniqueName: \"kubernetes.io/projected/9f5c64aa-676e-4e48-b714-02f6edb1d361-kube-api-access-xttqt\") pod \"cluster-autoscaler-operator-866dc4744-mw9tt\" (UID: \"9f5c64aa-676e-4e48-b714-02f6edb1d361\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-mw9tt" Mar 18 09:57:52.406033 master-0 kubenswrapper[8244]: I0318 09:57:52.405951 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-vm2nt\" (UniqueName: \"kubernetes.io/projected/29fbc78b-1887-40d4-8165-f0f7cc40b583-kube-api-access-vm2nt\") pod \"machine-api-operator-6fbb6cf6f9-xnvn9\" (UID: \"29fbc78b-1887-40d4-8165-f0f7cc40b583\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-xnvn9" Mar 18 09:57:52.406033 master-0 kubenswrapper[8244]: I0318 09:57:52.405975 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71755097-7543-48f8-8925-0e21650bf8f6-serving-cert\") pod \"insights-operator-68bf6ff9d6-bdcw7\" (UID: \"71755097-7543-48f8-8925-0e21650bf8f6\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bdcw7" Mar 18 09:57:52.406427 master-0 kubenswrapper[8244]: I0318 09:57:52.406390 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/71755097-7543-48f8-8925-0e21650bf8f6-snapshots\") pod \"insights-operator-68bf6ff9d6-bdcw7\" (UID: \"71755097-7543-48f8-8925-0e21650bf8f6\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bdcw7" Mar 18 09:57:52.406708 master-0 kubenswrapper[8244]: I0318 09:57:52.406677 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9f5c64aa-676e-4e48-b714-02f6edb1d361-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-mw9tt\" (UID: \"9f5c64aa-676e-4e48-b714-02f6edb1d361\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-mw9tt" Mar 18 09:57:52.407387 master-0 kubenswrapper[8244]: I0318 09:57:52.407342 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71755097-7543-48f8-8925-0e21650bf8f6-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-bdcw7\" (UID: \"71755097-7543-48f8-8925-0e21650bf8f6\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bdcw7" Mar 18 09:57:52.408351 
master-0 kubenswrapper[8244]: I0318 09:57:52.408307 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9f5c64aa-676e-4e48-b714-02f6edb1d361-cert\") pod \"cluster-autoscaler-operator-866dc4744-mw9tt\" (UID: \"9f5c64aa-676e-4e48-b714-02f6edb1d361\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-mw9tt" Mar 18 09:57:52.408922 master-0 kubenswrapper[8244]: I0318 09:57:52.408895 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71755097-7543-48f8-8925-0e21650bf8f6-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-bdcw7\" (UID: \"71755097-7543-48f8-8925-0e21650bf8f6\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bdcw7" Mar 18 09:57:52.409767 master-0 kubenswrapper[8244]: I0318 09:57:52.409742 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71755097-7543-48f8-8925-0e21650bf8f6-serving-cert\") pod \"insights-operator-68bf6ff9d6-bdcw7\" (UID: \"71755097-7543-48f8-8925-0e21650bf8f6\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bdcw7" Mar 18 09:57:52.409845 master-0 kubenswrapper[8244]: I0318 09:57:52.409718 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f88c2a18-11f5-45ef-aff1-3c5976716d85-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-zcm5j\" (UID: \"f88c2a18-11f5-45ef-aff1-3c5976716d85\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zcm5j" Mar 18 09:57:52.459845 master-0 kubenswrapper[8244]: I0318 09:57:52.459546 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-257hk\" (UniqueName: \"kubernetes.io/projected/29490aed-9c97-42d1-94c8-44d1de13b70c-kube-api-access-257hk\") pod 
\"cluster-storage-operator-7d87854d6-4kr54\" (UID: \"29490aed-9c97-42d1-94c8-44d1de13b70c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-4kr54" Mar 18 09:57:52.463170 master-0 kubenswrapper[8244]: I0318 09:57:52.460417 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t"] Mar 18 09:57:52.463170 master-0 kubenswrapper[8244]: I0318 09:57:52.461393 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t" Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: I0318 09:57:52.468055 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7b64dcc66c-2vx58"] Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: I0318 09:57:52.468889 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7b64dcc66c-2vx58" Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: W0318 09:57:52.475433 8244 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-lrdkh": failed to list *v1.Secret: secrets "machine-config-operator-dockercfg-lrdkh" is forbidden: User "system:node:master-0" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'master-0' and this object Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: E0318 09:57:52.475469 8244 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-lrdkh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-config-operator-dockercfg-lrdkh\" is forbidden: User \"system:node:master-0\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship 
found between node 'master-0' and this object" logger="UnhandledError" Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: W0318 09:57:52.482105 8244 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-operator-images": failed to list *v1.ConfigMap: configmaps "machine-config-operator-images" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'master-0' and this object Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: E0318 09:57:52.482143 8244 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"machine-config-operator-images\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: W0318 09:57:52.482233 8244 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-8wt5h": failed to list *v1.Secret: secrets "olm-operator-serviceaccount-dockercfg-8wt5h" is forbidden: User "system:node:master-0" cannot list resource "secrets" in API group "" in the namespace "openshift-operator-lifecycle-manager": no relationship found between node 'master-0' and this object Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: E0318 09:57:52.482247 8244 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-8wt5h\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"olm-operator-serviceaccount-dockercfg-8wt5h\" is forbidden: User \"system:node:master-0\" cannot list resource \"secrets\" in API group \"\" in the namespace 
\"openshift-operator-lifecycle-manager\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: W0318 09:57:52.492075 8244 reflector.go:561] object-"openshift-machine-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'master-0' and this object Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: E0318 09:57:52.492115 8244 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: W0318 09:57:52.492158 8244 reflector.go:561] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'master-0' and this object Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: E0318 09:57:52.492168 8244 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found 
between node 'master-0' and this object" logger="UnhandledError" Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: W0318 09:57:52.492195 8244 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert": failed to list *v1.Secret: secrets "packageserver-service-cert" is forbidden: User "system:node:master-0" cannot list resource "secrets" in API group "" in the namespace "openshift-operator-lifecycle-manager": no relationship found between node 'master-0' and this object Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: E0318 09:57:52.492205 8244 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"packageserver-service-cert\" is forbidden: User \"system:node:master-0\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-operator-lifecycle-manager\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: I0318 09:57:52.492402 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-4kr54" Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: I0318 09:57:52.511563 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scz6j\" (UniqueName: \"kubernetes.io/projected/f88c2a18-11f5-45ef-aff1-3c5976716d85-kube-api-access-scz6j\") pod \"control-plane-machine-set-operator-6f97756bc8-zcm5j\" (UID: \"f88c2a18-11f5-45ef-aff1-3c5976716d85\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zcm5j" Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: I0318 09:57:52.517172 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xttqt\" (UniqueName: \"kubernetes.io/projected/9f5c64aa-676e-4e48-b714-02f6edb1d361-kube-api-access-xttqt\") pod \"cluster-autoscaler-operator-866dc4744-mw9tt\" (UID: \"9f5c64aa-676e-4e48-b714-02f6edb1d361\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-mw9tt" Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: W0318 09:57:52.517686 8244 reflector.go:561] object-"openshift-machine-config-operator"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'master-0' and this object Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: E0318 09:57:52.517711 8244 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Mar 18 09:57:52.535447 master-0 
kubenswrapper[8244]: W0318 09:57:52.517754 8244 reflector.go:561] object-"openshift-machine-config-operator"/"mco-proxy-tls": failed to list *v1.Secret: secrets "mco-proxy-tls" is forbidden: User "system:node:master-0" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'master-0' and this object Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: E0318 09:57:52.517766 8244 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"mco-proxy-tls\" is forbidden: User \"system:node:master-0\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: I0318 09:57:52.518281 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/1084562a-20a0-432d-b739-90bc0a4daff2-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-lnq7l\" (UID: \"1084562a-20a0-432d-b739-90bc0a4daff2\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l" Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: I0318 09:57:52.518305 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1084562a-20a0-432d-b739-90bc0a4daff2-config\") pod \"cluster-baremetal-operator-6f69995874-lnq7l\" (UID: \"1084562a-20a0-432d-b739-90bc0a4daff2\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l" Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: I0318 09:57:52.518332 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/1084562a-20a0-432d-b739-90bc0a4daff2-images\") pod \"cluster-baremetal-operator-6f69995874-lnq7l\" (UID: \"1084562a-20a0-432d-b739-90bc0a4daff2\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l"
Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: I0318 09:57:52.518354 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmsjt\" (UniqueName: \"kubernetes.io/projected/1084562a-20a0-432d-b739-90bc0a4daff2-kube-api-access-qmsjt\") pod \"cluster-baremetal-operator-6f69995874-lnq7l\" (UID: \"1084562a-20a0-432d-b739-90bc0a4daff2\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l"
Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: I0318 09:57:52.518371 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29fbc78b-1887-40d4-8165-f0f7cc40b583-config\") pod \"machine-api-operator-6fbb6cf6f9-xnvn9\" (UID: \"29fbc78b-1887-40d4-8165-f0f7cc40b583\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-xnvn9"
Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: I0318 09:57:52.518386 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/29fbc78b-1887-40d4-8165-f0f7cc40b583-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-xnvn9\" (UID: \"29fbc78b-1887-40d4-8165-f0f7cc40b583\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-xnvn9"
Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: I0318 09:57:52.518409 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/29fbc78b-1887-40d4-8165-f0f7cc40b583-images\") pod \"machine-api-operator-6fbb6cf6f9-xnvn9\" (UID: \"29fbc78b-1887-40d4-8165-f0f7cc40b583\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-xnvn9"
Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: I0318 09:57:52.518432 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm2nt\" (UniqueName: \"kubernetes.io/projected/29fbc78b-1887-40d4-8165-f0f7cc40b583-kube-api-access-vm2nt\") pod \"machine-api-operator-6fbb6cf6f9-xnvn9\" (UID: \"29fbc78b-1887-40d4-8165-f0f7cc40b583\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-xnvn9"
Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: I0318 09:57:52.518455 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1084562a-20a0-432d-b739-90bc0a4daff2-cert\") pod \"cluster-baremetal-operator-6f69995874-lnq7l\" (UID: \"1084562a-20a0-432d-b739-90bc0a4daff2\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l"
Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: I0318 09:57:52.523604 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/29fbc78b-1887-40d4-8165-f0f7cc40b583-images\") pod \"machine-api-operator-6fbb6cf6f9-xnvn9\" (UID: \"29fbc78b-1887-40d4-8165-f0f7cc40b583\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-xnvn9"
Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: I0318 09:57:52.524281 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1084562a-20a0-432d-b739-90bc0a4daff2-config\") pod \"cluster-baremetal-operator-6f69995874-lnq7l\" (UID: \"1084562a-20a0-432d-b739-90bc0a4daff2\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l"
Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: I0318 09:57:52.525292 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29fbc78b-1887-40d4-8165-f0f7cc40b583-config\") pod \"machine-api-operator-6fbb6cf6f9-xnvn9\" (UID: \"29fbc78b-1887-40d4-8165-f0f7cc40b583\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-xnvn9"
Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: I0318 09:57:52.530076 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t"]
Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: I0318 09:57:52.531089 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/29fbc78b-1887-40d4-8165-f0f7cc40b583-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-xnvn9\" (UID: \"29fbc78b-1887-40d4-8165-f0f7cc40b583\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-xnvn9"
Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: I0318 09:57:52.531357 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1084562a-20a0-432d-b739-90bc0a4daff2-images\") pod \"cluster-baremetal-operator-6f69995874-lnq7l\" (UID: \"1084562a-20a0-432d-b739-90bc0a4daff2\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l"
Mar 18 09:57:52.535447 master-0 kubenswrapper[8244]: I0318 09:57:52.531991 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zcm5j"
Mar 18 09:57:52.540968 master-0 kubenswrapper[8244]: I0318 09:57:52.537989 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/1084562a-20a0-432d-b739-90bc0a4daff2-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-lnq7l\" (UID: \"1084562a-20a0-432d-b739-90bc0a4daff2\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l"
Mar 18 09:57:52.558467 master-0 kubenswrapper[8244]: I0318 09:57:52.549363 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1084562a-20a0-432d-b739-90bc0a4daff2-cert\") pod \"cluster-baremetal-operator-6f69995874-lnq7l\" (UID: \"1084562a-20a0-432d-b739-90bc0a4daff2\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l"
Mar 18 09:57:52.558467 master-0 kubenswrapper[8244]: I0318 09:57:52.556145 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7b64dcc66c-2vx58"]
Mar 18 09:57:52.572533 master-0 kubenswrapper[8244]: I0318 09:57:52.571213 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-mw9tt"
Mar 18 09:57:52.601463 master-0 kubenswrapper[8244]: I0318 09:57:52.598605 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmsjt\" (UniqueName: \"kubernetes.io/projected/1084562a-20a0-432d-b739-90bc0a4daff2-kube-api-access-qmsjt\") pod \"cluster-baremetal-operator-6f69995874-lnq7l\" (UID: \"1084562a-20a0-432d-b739-90bc0a4daff2\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l"
Mar 18 09:57:52.609451 master-0 kubenswrapper[8244]: I0318 09:57:52.609268 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-5dg2r"
Mar 18 09:57:52.618983 master-0 kubenswrapper[8244]: I0318 09:57:52.615080 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-wk759"
Mar 18 09:57:52.618983 master-0 kubenswrapper[8244]: I0318 09:57:52.615993 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm2nt\" (UniqueName: \"kubernetes.io/projected/29fbc78b-1887-40d4-8165-f0f7cc40b583-kube-api-access-vm2nt\") pod \"machine-api-operator-6fbb6cf6f9-xnvn9\" (UID: \"29fbc78b-1887-40d4-8165-f0f7cc40b583\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-xnvn9"
Mar 18 09:57:52.623944 master-0 kubenswrapper[8244]: I0318 09:57:52.623439 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2d014721-ed53-447a-b737-c496bbba18be-proxy-tls\") pod \"machine-config-operator-84d549f6d5-gnl5t\" (UID: \"2d014721-ed53-447a-b737-c496bbba18be\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t"
Mar 18 09:57:52.623944 master-0 kubenswrapper[8244]: I0318 09:57:52.623530 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2d014721-ed53-447a-b737-c496bbba18be-images\") pod \"machine-config-operator-84d549f6d5-gnl5t\" (UID: \"2d014721-ed53-447a-b737-c496bbba18be\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t"
Mar 18 09:57:52.623944 master-0 kubenswrapper[8244]: I0318 09:57:52.623561 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2d014721-ed53-447a-b737-c496bbba18be-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-gnl5t\" (UID: \"2d014721-ed53-447a-b737-c496bbba18be\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t"
Mar 18 09:57:52.623944 master-0 kubenswrapper[8244]: I0318 09:57:52.623594 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bql7p\" (UniqueName: \"kubernetes.io/projected/bdf80ddc-7c99-4f60-814b-ba98809ef41d-kube-api-access-bql7p\") pod \"packageserver-7b64dcc66c-2vx58\" (UID: \"bdf80ddc-7c99-4f60-814b-ba98809ef41d\") " pod="openshift-operator-lifecycle-manager/packageserver-7b64dcc66c-2vx58"
Mar 18 09:57:52.623944 master-0 kubenswrapper[8244]: I0318 09:57:52.623617 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4btrk\" (UniqueName: \"kubernetes.io/projected/2d014721-ed53-447a-b737-c496bbba18be-kube-api-access-4btrk\") pod \"machine-config-operator-84d549f6d5-gnl5t\" (UID: \"2d014721-ed53-447a-b737-c496bbba18be\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t"
Mar 18 09:57:52.623944 master-0 kubenswrapper[8244]: I0318 09:57:52.623639 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bdf80ddc-7c99-4f60-814b-ba98809ef41d-webhook-cert\") pod \"packageserver-7b64dcc66c-2vx58\" (UID: \"bdf80ddc-7c99-4f60-814b-ba98809ef41d\") " pod="openshift-operator-lifecycle-manager/packageserver-7b64dcc66c-2vx58"
Mar 18 09:57:52.623944 master-0 kubenswrapper[8244]: I0318 09:57:52.623672 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bdf80ddc-7c99-4f60-814b-ba98809ef41d-apiservice-cert\") pod \"packageserver-7b64dcc66c-2vx58\" (UID: \"bdf80ddc-7c99-4f60-814b-ba98809ef41d\") " pod="openshift-operator-lifecycle-manager/packageserver-7b64dcc66c-2vx58"
Mar 18 09:57:52.623944 master-0 kubenswrapper[8244]: I0318 09:57:52.623713 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bdf80ddc-7c99-4f60-814b-ba98809ef41d-tmpfs\") pod \"packageserver-7b64dcc66c-2vx58\" (UID: \"bdf80ddc-7c99-4f60-814b-ba98809ef41d\") " pod="openshift-operator-lifecycle-manager/packageserver-7b64dcc66c-2vx58"
Mar 18 09:57:52.640217 master-0 kubenswrapper[8244]: I0318 09:57:52.639385 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvhfc\" (UniqueName: \"kubernetes.io/projected/71755097-7543-48f8-8925-0e21650bf8f6-kube-api-access-qvhfc\") pod \"insights-operator-68bf6ff9d6-bdcw7\" (UID: \"71755097-7543-48f8-8925-0e21650bf8f6\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bdcw7"
Mar 18 09:57:52.673954 master-0 kubenswrapper[8244]: I0318 09:57:52.673456 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-xnvn9"
Mar 18 09:57:52.699851 master-0 kubenswrapper[8244]: I0318 09:57:52.699068 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-tntvw" event={"ID":"e9eb27ff-f89f-4c0e-abac-9fdfd8cee887","Type":"ContainerStarted","Data":"e4e54b7e57036564c391f3dbe2a0d0cddde83c0e5f2501af4faca38ba51ff057"}
Mar 18 09:57:52.710748 master-0 kubenswrapper[8244]: I0318 09:57:52.707550 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l"
Mar 18 09:57:52.724858 master-0 kubenswrapper[8244]: I0318 09:57:52.724587 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2d014721-ed53-447a-b737-c496bbba18be-images\") pod \"machine-config-operator-84d549f6d5-gnl5t\" (UID: \"2d014721-ed53-447a-b737-c496bbba18be\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t"
Mar 18 09:57:52.724858 master-0 kubenswrapper[8244]: I0318 09:57:52.724643 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2d014721-ed53-447a-b737-c496bbba18be-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-gnl5t\" (UID: \"2d014721-ed53-447a-b737-c496bbba18be\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t"
Mar 18 09:57:52.724858 master-0 kubenswrapper[8244]: I0318 09:57:52.724768 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bql7p\" (UniqueName: \"kubernetes.io/projected/bdf80ddc-7c99-4f60-814b-ba98809ef41d-kube-api-access-bql7p\") pod \"packageserver-7b64dcc66c-2vx58\" (UID: \"bdf80ddc-7c99-4f60-814b-ba98809ef41d\") " pod="openshift-operator-lifecycle-manager/packageserver-7b64dcc66c-2vx58"
Mar 18 09:57:52.724858 master-0 kubenswrapper[8244]: I0318 09:57:52.724854 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4btrk\" (UniqueName: \"kubernetes.io/projected/2d014721-ed53-447a-b737-c496bbba18be-kube-api-access-4btrk\") pod \"machine-config-operator-84d549f6d5-gnl5t\" (UID: \"2d014721-ed53-447a-b737-c496bbba18be\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t"
Mar 18 09:57:52.725194 master-0 kubenswrapper[8244]: I0318 09:57:52.724890 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bdf80ddc-7c99-4f60-814b-ba98809ef41d-webhook-cert\") pod \"packageserver-7b64dcc66c-2vx58\" (UID: \"bdf80ddc-7c99-4f60-814b-ba98809ef41d\") " pod="openshift-operator-lifecycle-manager/packageserver-7b64dcc66c-2vx58"
Mar 18 09:57:52.725194 master-0 kubenswrapper[8244]: I0318 09:57:52.724959 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bdf80ddc-7c99-4f60-814b-ba98809ef41d-apiservice-cert\") pod \"packageserver-7b64dcc66c-2vx58\" (UID: \"bdf80ddc-7c99-4f60-814b-ba98809ef41d\") " pod="openshift-operator-lifecycle-manager/packageserver-7b64dcc66c-2vx58"
Mar 18 09:57:52.725194 master-0 kubenswrapper[8244]: I0318 09:57:52.725049 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bdf80ddc-7c99-4f60-814b-ba98809ef41d-tmpfs\") pod \"packageserver-7b64dcc66c-2vx58\" (UID: \"bdf80ddc-7c99-4f60-814b-ba98809ef41d\") " pod="openshift-operator-lifecycle-manager/packageserver-7b64dcc66c-2vx58"
Mar 18 09:57:52.725194 master-0 kubenswrapper[8244]: I0318 09:57:52.725120 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2d014721-ed53-447a-b737-c496bbba18be-proxy-tls\") pod \"machine-config-operator-84d549f6d5-gnl5t\" (UID: \"2d014721-ed53-447a-b737-c496bbba18be\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t"
Mar 18 09:57:52.726053 master-0 kubenswrapper[8244]: I0318 09:57:52.726018 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bdf80ddc-7c99-4f60-814b-ba98809ef41d-tmpfs\") pod \"packageserver-7b64dcc66c-2vx58\" (UID: \"bdf80ddc-7c99-4f60-814b-ba98809ef41d\") " pod="openshift-operator-lifecycle-manager/packageserver-7b64dcc66c-2vx58"
Mar 18 09:57:52.789259 master-0 kubenswrapper[8244]: I0318 09:57:52.788979 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bql7p\" (UniqueName: \"kubernetes.io/projected/bdf80ddc-7c99-4f60-814b-ba98809ef41d-kube-api-access-bql7p\") pod \"packageserver-7b64dcc66c-2vx58\" (UID: \"bdf80ddc-7c99-4f60-814b-ba98809ef41d\") " pod="openshift-operator-lifecycle-manager/packageserver-7b64dcc66c-2vx58"
Mar 18 09:57:52.813241 master-0 kubenswrapper[8244]: I0318 09:57:52.809962 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-rtnkl"]
Mar 18 09:57:52.870069 master-0 kubenswrapper[8244]: I0318 09:57:52.869906 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-68bf6ff9d6-bdcw7"
Mar 18 09:57:52.908489 master-0 kubenswrapper[8244]: W0318 09:57:52.908456 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcaec44dc_aab7_4407_b34a_52bbe4b4f635.slice/crio-a4b6c9bb5e1aa6ddb46f2ece42f31a363d888ffb22d8e2d50941005d7a91173e WatchSource:0}: Error finding container a4b6c9bb5e1aa6ddb46f2ece42f31a363d888ffb22d8e2d50941005d7a91173e: Status 404 returned error can't find the container with id a4b6c9bb5e1aa6ddb46f2ece42f31a363d888ffb22d8e2d50941005d7a91173e
Mar 18 09:57:53.016944 master-0 kubenswrapper[8244]: I0318 09:57:53.010260 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zcm5j"]
Mar 18 09:57:53.149894 master-0 kubenswrapper[8244]: I0318 09:57:53.148576 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-4kr54"]
Mar 18 09:57:53.160498 master-0 kubenswrapper[8244]: W0318 09:57:53.160442 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29490aed_9c97_42d1_94c8_44d1de13b70c.slice/crio-9ecbe775d85b5008c6adeeb8170b86d61ae88bf900fcd70723b66300a47bcaec WatchSource:0}: Error finding container 9ecbe775d85b5008c6adeeb8170b86d61ae88bf900fcd70723b66300a47bcaec: Status 404 returned error can't find the container with id 9ecbe775d85b5008c6adeeb8170b86d61ae88bf900fcd70723b66300a47bcaec
Mar 18 09:57:53.349249 master-0 kubenswrapper[8244]: I0318 09:57:53.349205 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-lrdkh"
Mar 18 09:57:53.419549 master-0 kubenswrapper[8244]: I0318 09:57:53.419503 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-866dc4744-mw9tt"]
Mar 18 09:57:53.434897 master-0 kubenswrapper[8244]: W0318 09:57:53.434812 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f5c64aa_676e_4e48_b714_02f6edb1d361.slice/crio-da42cce599588e6c99d4cd2839a25bf8a6c6ba9dc794e5b75cfaceda627f492b WatchSource:0}: Error finding container da42cce599588e6c99d4cd2839a25bf8a6c6ba9dc794e5b75cfaceda627f492b: Status 404 returned error can't find the container with id da42cce599588e6c99d4cd2839a25bf8a6c6ba9dc794e5b75cfaceda627f492b
Mar 18 09:57:53.480207 master-0 kubenswrapper[8244]: I0318 09:57:53.480041 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l"]
Mar 18 09:57:53.486441 master-0 kubenswrapper[8244]: I0318 09:57:53.486114 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-6fbb6cf6f9-xnvn9"]
Mar 18 09:57:53.489269 master-0 kubenswrapper[8244]: W0318 09:57:53.489218 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1084562a_20a0_432d_b739_90bc0a4daff2.slice/crio-e702a6208830f572cc3b5f2ed7735679946a02e12d549d40a5020b7820cc5f46 WatchSource:0}: Error finding container e702a6208830f572cc3b5f2ed7735679946a02e12d549d40a5020b7820cc5f46: Status 404 returned error can't find the container with id e702a6208830f572cc3b5f2ed7735679946a02e12d549d40a5020b7820cc5f46
Mar 18 09:57:53.516313 master-0 kubenswrapper[8244]: I0318 09:57:53.516272 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 18 09:57:53.529844 master-0 kubenswrapper[8244]: I0318 09:57:53.529787 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bdf80ddc-7c99-4f60-814b-ba98809ef41d-apiservice-cert\") pod \"packageserver-7b64dcc66c-2vx58\" (UID: \"bdf80ddc-7c99-4f60-814b-ba98809ef41d\") " pod="openshift-operator-lifecycle-manager/packageserver-7b64dcc66c-2vx58"
Mar 18 09:57:53.532617 master-0 kubenswrapper[8244]: I0318 09:57:53.532584 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bdf80ddc-7c99-4f60-814b-ba98809ef41d-webhook-cert\") pod \"packageserver-7b64dcc66c-2vx58\" (UID: \"bdf80ddc-7c99-4f60-814b-ba98809ef41d\") " pod="openshift-operator-lifecycle-manager/packageserver-7b64dcc66c-2vx58"
Mar 18 09:57:53.611513 master-0 kubenswrapper[8244]: I0318 09:57:53.610129 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-5dg2r"]
Mar 18 09:57:53.616444 master-0 kubenswrapper[8244]: I0318 09:57:53.616401 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-68bf6ff9d6-bdcw7"]
Mar 18 09:57:53.683953 master-0 kubenswrapper[8244]: I0318 09:57:53.683426 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 18 09:57:53.688945 master-0 kubenswrapper[8244]: I0318 09:57:53.688684 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2d014721-ed53-447a-b737-c496bbba18be-images\") pod \"machine-config-operator-84d549f6d5-gnl5t\" (UID: \"2d014721-ed53-447a-b737-c496bbba18be\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t"
Mar 18 09:57:53.709689 master-0 kubenswrapper[8244]: I0318 09:57:53.709641 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-5dg2r" event={"ID":"ce65f61f-8e3a-47d5-ac12-ad4ab05d2850","Type":"ContainerStarted","Data":"481a20c56b1513a6550470d25ece05987dc0ad3be0f23f19f26b6d5a7a36ce42"}
Mar 18 09:57:53.710711 master-0 kubenswrapper[8244]: I0318 09:57:53.710678 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-bdcw7" event={"ID":"71755097-7543-48f8-8925-0e21650bf8f6","Type":"ContainerStarted","Data":"d7d862ef1259d0f32a24b080a794c178935b4f82b34bd652442b355adbe27b4c"}
Mar 18 09:57:53.711491 master-0 kubenswrapper[8244]: I0318 09:57:53.711456 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zcm5j" event={"ID":"f88c2a18-11f5-45ef-aff1-3c5976716d85","Type":"ContainerStarted","Data":"a62338b3d8b6fefea0ba1a5636a4c5079225838e71c631e7514905926d40be01"}
Mar 18 09:57:53.713864 master-0 kubenswrapper[8244]: I0318 09:57:53.713235 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-wk759" event={"ID":"22c13008-d600-417e-9df1-96f3f579a11f","Type":"ContainerStarted","Data":"bfb077b2983815767698c9710bbdf0908e95e510a0d18533f2ac399dba72b53b"}
Mar 18 09:57:53.713864 master-0 kubenswrapper[8244]: I0318 09:57:53.713266 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-wk759" event={"ID":"22c13008-d600-417e-9df1-96f3f579a11f","Type":"ContainerStarted","Data":"2149c630333ae9ebbeba145d1b4c7914481957cb46004d4b6849e674c4e85846"}
Mar 18 09:57:53.714914 master-0 kubenswrapper[8244]: I0318 09:57:53.714785 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-rtnkl" event={"ID":"caec44dc-aab7-4407-b34a-52bbe4b4f635","Type":"ContainerStarted","Data":"0fecfec884a623bc6c074e00c0f2b4a851ff122282b80623634660a5991d6c3a"}
Mar 18 09:57:53.714914 master-0 kubenswrapper[8244]: I0318 09:57:53.714816 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-rtnkl" event={"ID":"caec44dc-aab7-4407-b34a-52bbe4b4f635","Type":"ContainerStarted","Data":"a4b6c9bb5e1aa6ddb46f2ece42f31a363d888ffb22d8e2d50941005d7a91173e"}
Mar 18 09:57:53.716657 master-0 kubenswrapper[8244]: I0318 09:57:53.716455 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l" event={"ID":"1084562a-20a0-432d-b739-90bc0a4daff2","Type":"ContainerStarted","Data":"e702a6208830f572cc3b5f2ed7735679946a02e12d549d40a5020b7820cc5f46"}
Mar 18 09:57:53.719949 master-0 kubenswrapper[8244]: I0318 09:57:53.719908 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-xnvn9" event={"ID":"29fbc78b-1887-40d4-8165-f0f7cc40b583","Type":"ContainerStarted","Data":"827b305b7796713c9fc2a15242fa8ece00001ca520a1d2b670a60fef282493fc"}
Mar 18 09:57:53.719949 master-0 kubenswrapper[8244]: I0318 09:57:53.719947 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-xnvn9" event={"ID":"29fbc78b-1887-40d4-8165-f0f7cc40b583","Type":"ContainerStarted","Data":"dc23eb8c4f8df6172dfca6b7df2e710cff8ef0d5f4a2b6bc29af4b8dd83114fe"}
Mar 18 09:57:53.721657 master-0 kubenswrapper[8244]: I0318 09:57:53.721617 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-mw9tt" event={"ID":"9f5c64aa-676e-4e48-b714-02f6edb1d361","Type":"ContainerStarted","Data":"af2cbf43c52e6ce6cf149bd1ad8e93d0502140c81b27ffaf0b8a3cb745d3c6b3"}
Mar 18 09:57:53.721657 master-0 kubenswrapper[8244]: I0318 09:57:53.721653 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-mw9tt" event={"ID":"9f5c64aa-676e-4e48-b714-02f6edb1d361","Type":"ContainerStarted","Data":"da42cce599588e6c99d4cd2839a25bf8a6c6ba9dc794e5b75cfaceda627f492b"}
Mar 18 09:57:53.722854 master-0 kubenswrapper[8244]: I0318 09:57:53.722782 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-4kr54" event={"ID":"29490aed-9c97-42d1-94c8-44d1de13b70c","Type":"ContainerStarted","Data":"9ecbe775d85b5008c6adeeb8170b86d61ae88bf900fcd70723b66300a47bcaec"}
Mar 18 09:57:53.725691 master-0 kubenswrapper[8244]: E0318 09:57:53.725659 8244 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:57:53.725770 master-0 kubenswrapper[8244]: E0318 09:57:53.725698 8244 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:57:53.725770 master-0 kubenswrapper[8244]: E0318 09:57:53.725735 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2d014721-ed53-447a-b737-c496bbba18be-auth-proxy-config podName:2d014721-ed53-447a-b737-c496bbba18be nodeName:}" failed. No retries permitted until 2026-03-18 09:57:54.225715129 +0000 UTC m=+190.705451257 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/2d014721-ed53-447a-b737-c496bbba18be-auth-proxy-config") pod "machine-config-operator-84d549f6d5-gnl5t" (UID: "2d014721-ed53-447a-b737-c496bbba18be") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:57:53.725891 master-0 kubenswrapper[8244]: E0318 09:57:53.725778 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d014721-ed53-447a-b737-c496bbba18be-proxy-tls podName:2d014721-ed53-447a-b737-c496bbba18be nodeName:}" failed. No retries permitted until 2026-03-18 09:57:54.2257546 +0000 UTC m=+190.705490718 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/2d014721-ed53-447a-b737-c496bbba18be-proxy-tls") pod "machine-config-operator-84d549f6d5-gnl5t" (UID: "2d014721-ed53-447a-b737-c496bbba18be") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:57:53.771377 master-0 kubenswrapper[8244]: I0318 09:57:53.771323 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Mar 18 09:57:53.773191 master-0 kubenswrapper[8244]: E0318 09:57:53.773164 8244 projected.go:288] Couldn't get configMap openshift-machine-config-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:57:53.819707 master-0 kubenswrapper[8244]: I0318 09:57:53.819570 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 18 09:57:53.832067 master-0 kubenswrapper[8244]: I0318 09:57:53.832004 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Mar 18 09:57:53.835172 master-0 kubenswrapper[8244]: E0318 09:57:53.835131 8244 projected.go:194] Error preparing data for projected volume kube-api-access-4btrk for pod openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t: failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:57:53.835302 master-0 kubenswrapper[8244]: E0318 09:57:53.835231 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2d014721-ed53-447a-b737-c496bbba18be-kube-api-access-4btrk podName:2d014721-ed53-447a-b737-c496bbba18be nodeName:}" failed. No retries permitted until 2026-03-18 09:57:54.335208794 +0000 UTC m=+190.814944932 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4btrk" (UniqueName: "kubernetes.io/projected/2d014721-ed53-447a-b737-c496bbba18be-kube-api-access-4btrk") pod "machine-config-operator-84d549f6d5-gnl5t" (UID: "2d014721-ed53-447a-b737-c496bbba18be") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:57:53.881717 master-0 kubenswrapper[8244]: I0318 09:57:53.881655 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 18 09:57:53.929189 master-0 kubenswrapper[8244]: I0318 09:57:53.928724 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-8wt5h"
Mar 18 09:57:53.937034 master-0 kubenswrapper[8244]: I0318 09:57:53.936968 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7b64dcc66c-2vx58"
Mar 18 09:57:54.266111 master-0 kubenswrapper[8244]: I0318 09:57:54.266020 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2d014721-ed53-447a-b737-c496bbba18be-proxy-tls\") pod \"machine-config-operator-84d549f6d5-gnl5t\" (UID: \"2d014721-ed53-447a-b737-c496bbba18be\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t"
Mar 18 09:57:54.266111 master-0 kubenswrapper[8244]: I0318 09:57:54.266123 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2d014721-ed53-447a-b737-c496bbba18be-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-gnl5t\" (UID: \"2d014721-ed53-447a-b737-c496bbba18be\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t"
Mar 18 09:57:54.266937 master-0 kubenswrapper[8244]: I0318 09:57:54.266893 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2d014721-ed53-447a-b737-c496bbba18be-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-gnl5t\" (UID: \"2d014721-ed53-447a-b737-c496bbba18be\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t"
Mar 18 09:57:54.269088 master-0 kubenswrapper[8244]: I0318 09:57:54.269043 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2d014721-ed53-447a-b737-c496bbba18be-proxy-tls\") pod \"machine-config-operator-84d549f6d5-gnl5t\" (UID: \"2d014721-ed53-447a-b737-c496bbba18be\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t"
Mar 18 09:57:54.368479 master-0 kubenswrapper[8244]: I0318 09:57:54.368286 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4btrk\" (UniqueName: \"kubernetes.io/projected/2d014721-ed53-447a-b737-c496bbba18be-kube-api-access-4btrk\") pod \"machine-config-operator-84d549f6d5-gnl5t\" (UID: \"2d014721-ed53-447a-b737-c496bbba18be\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t"
Mar 18 09:57:54.371337 master-0 kubenswrapper[8244]: I0318 09:57:54.371274 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4btrk\" (UniqueName: \"kubernetes.io/projected/2d014721-ed53-447a-b737-c496bbba18be-kube-api-access-4btrk\") pod \"machine-config-operator-84d549f6d5-gnl5t\" (UID: \"2d014721-ed53-447a-b737-c496bbba18be\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t"
Mar 18 09:57:54.403413 master-0 kubenswrapper[8244]: I0318 09:57:54.403336 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t"
Mar 18 09:57:54.736045 master-0 kubenswrapper[8244]: I0318 09:57:54.735097 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jl7c8"
Mar 18 09:57:54.772270 master-0 kubenswrapper[8244]: I0318 09:57:54.772221 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jl7c8"
Mar 18 09:57:55.854228 master-0 kubenswrapper[8244]: I0318 09:57:55.854157 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7b64dcc66c-2vx58"]
Mar 18 09:57:55.856718 master-0 kubenswrapper[8244]: I0318 09:57:55.856671 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t"]
Mar 18 09:57:55.869636 master-0 kubenswrapper[8244]: W0318 09:57:55.869570 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d014721_ed53_447a_b737_c496bbba18be.slice/crio-99f1238675e89d202ac72814030597ebf2c78d75d8dce9d24566f86cd13b327c WatchSource:0}: Error finding container 99f1238675e89d202ac72814030597ebf2c78d75d8dce9d24566f86cd13b327c: Status 404 returned error can't find the container with id 99f1238675e89d202ac72814030597ebf2c78d75d8dce9d24566f86cd13b327c
Mar 18 09:57:55.871684 master-0 kubenswrapper[8244]: W0318 09:57:55.871647 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdf80ddc_7c99_4f60_814b_ba98809ef41d.slice/crio-7483df25713a00b0ea8cbc4c6314a73f83bff54b160af6b49103c48fec6f8b1e WatchSource:0}: Error finding container 7483df25713a00b0ea8cbc4c6314a73f83bff54b160af6b49103c48fec6f8b1e: Status 404 returned error can't find the container with id 7483df25713a00b0ea8cbc4c6314a73f83bff54b160af6b49103c48fec6f8b1e
Mar 18 09:57:56.755721 master-0 kubenswrapper[8244]: I0318 09:57:56.755497 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7b64dcc66c-2vx58" event={"ID":"bdf80ddc-7c99-4f60-814b-ba98809ef41d","Type":"ContainerStarted","Data":"b400dad6213948c734c29644b5be08e506a95dbd4d523260eadf3db84e639f92"}
Mar 18 09:57:56.755721 master-0 kubenswrapper[8244]: I0318 09:57:56.755542 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7b64dcc66c-2vx58" event={"ID":"bdf80ddc-7c99-4f60-814b-ba98809ef41d","Type":"ContainerStarted","Data":"7483df25713a00b0ea8cbc4c6314a73f83bff54b160af6b49103c48fec6f8b1e"}
Mar 18 09:57:56.756702 master-0 kubenswrapper[8244]: I0318 09:57:56.756666 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-7b64dcc66c-2vx58"
Mar 18 09:57:56.760080 master-0 kubenswrapper[8244]: I0318 09:57:56.760049 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t" event={"ID":"2d014721-ed53-447a-b737-c496bbba18be","Type":"ContainerStarted","Data":"09180a6a9fee68a97b5503198f4ae1ab6d84235d2b7270501ebf779151b55941"}
Mar 18 09:57:56.760080 master-0 kubenswrapper[8244]: I0318 09:57:56.760081 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t" event={"ID":"2d014721-ed53-447a-b737-c496bbba18be","Type":"ContainerStarted","Data":"99f1238675e89d202ac72814030597ebf2c78d75d8dce9d24566f86cd13b327c"}
Mar 18 09:57:56.778084 master-0 kubenswrapper[8244]: I0318 09:57:56.777369 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7b64dcc66c-2vx58"
Mar 18 09:57:56.793571 master-0 kubenswrapper[8244]: I0318 09:57:56.793423 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7b64dcc66c-2vx58" podStartSLOduration=4.793401831 podStartE2EDuration="4.793401831s" podCreationTimestamp="2026-03-18 09:57:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:57:56.784451953 +0000 UTC m=+193.264188081" watchObservedRunningTime="2026-03-18 09:57:56.793401831 +0000 UTC m=+193.273137959"
Mar 18 09:57:58.784958 master-0 kubenswrapper[8244]: I0318 09:57:58.784870 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t" event={"ID":"2d014721-ed53-447a-b737-c496bbba18be","Type":"ContainerStarted","Data":"e8991b328f1cf47c089945cd100fe341debc1934661e14473f696f8de9edc3fc"}
Mar 18 09:57:59.174416 master-0 kubenswrapper[8244]: I0318 09:57:59.174196 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration"
pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t" podStartSLOduration=7.174170363 podStartE2EDuration="7.174170363s" podCreationTimestamp="2026-03-18 09:57:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:57:59.16709686 +0000 UTC m=+195.646832978" watchObservedRunningTime="2026-03-18 09:57:59.174170363 +0000 UTC m=+195.653906501" Mar 18 09:57:59.314107 master-0 kubenswrapper[8244]: I0318 09:57:59.312657 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-mtdk2"] Mar 18 09:57:59.314107 master-0 kubenswrapper[8244]: I0318 09:57:59.313673 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mtdk2" Mar 18 09:57:59.315536 master-0 kubenswrapper[8244]: I0318 09:57:59.315517 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-glndn" Mar 18 09:57:59.315725 master-0 kubenswrapper[8244]: I0318 09:57:59.315567 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 18 09:57:59.445264 master-0 kubenswrapper[8244]: I0318 09:57:59.443996 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e5e0836f-c0b4-40cd-9f63-55774da2740e-mcd-auth-proxy-config\") pod \"machine-config-daemon-mtdk2\" (UID: \"e5e0836f-c0b4-40cd-9f63-55774da2740e\") " pod="openshift-machine-config-operator/machine-config-daemon-mtdk2" Mar 18 09:57:59.445264 master-0 kubenswrapper[8244]: I0318 09:57:59.444059 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/e5e0836f-c0b4-40cd-9f63-55774da2740e-rootfs\") pod \"machine-config-daemon-mtdk2\" (UID: \"e5e0836f-c0b4-40cd-9f63-55774da2740e\") " pod="openshift-machine-config-operator/machine-config-daemon-mtdk2" Mar 18 09:57:59.445264 master-0 kubenswrapper[8244]: I0318 09:57:59.444173 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e5e0836f-c0b4-40cd-9f63-55774da2740e-proxy-tls\") pod \"machine-config-daemon-mtdk2\" (UID: \"e5e0836f-c0b4-40cd-9f63-55774da2740e\") " pod="openshift-machine-config-operator/machine-config-daemon-mtdk2" Mar 18 09:57:59.445264 master-0 kubenswrapper[8244]: I0318 09:57:59.444402 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k94j4\" (UniqueName: \"kubernetes.io/projected/e5e0836f-c0b4-40cd-9f63-55774da2740e-kube-api-access-k94j4\") pod \"machine-config-daemon-mtdk2\" (UID: \"e5e0836f-c0b4-40cd-9f63-55774da2740e\") " pod="openshift-machine-config-operator/machine-config-daemon-mtdk2" Mar 18 09:57:59.545929 master-0 kubenswrapper[8244]: I0318 09:57:59.545779 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e5e0836f-c0b4-40cd-9f63-55774da2740e-proxy-tls\") pod \"machine-config-daemon-mtdk2\" (UID: \"e5e0836f-c0b4-40cd-9f63-55774da2740e\") " pod="openshift-machine-config-operator/machine-config-daemon-mtdk2" Mar 18 09:57:59.545929 master-0 kubenswrapper[8244]: I0318 09:57:59.545889 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k94j4\" (UniqueName: \"kubernetes.io/projected/e5e0836f-c0b4-40cd-9f63-55774da2740e-kube-api-access-k94j4\") pod \"machine-config-daemon-mtdk2\" (UID: \"e5e0836f-c0b4-40cd-9f63-55774da2740e\") " pod="openshift-machine-config-operator/machine-config-daemon-mtdk2" Mar 18 09:57:59.546145 master-0 
kubenswrapper[8244]: I0318 09:57:59.546104 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e5e0836f-c0b4-40cd-9f63-55774da2740e-mcd-auth-proxy-config\") pod \"machine-config-daemon-mtdk2\" (UID: \"e5e0836f-c0b4-40cd-9f63-55774da2740e\") " pod="openshift-machine-config-operator/machine-config-daemon-mtdk2" Mar 18 09:57:59.546244 master-0 kubenswrapper[8244]: I0318 09:57:59.546223 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e5e0836f-c0b4-40cd-9f63-55774da2740e-rootfs\") pod \"machine-config-daemon-mtdk2\" (UID: \"e5e0836f-c0b4-40cd-9f63-55774da2740e\") " pod="openshift-machine-config-operator/machine-config-daemon-mtdk2" Mar 18 09:57:59.546351 master-0 kubenswrapper[8244]: I0318 09:57:59.546331 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e5e0836f-c0b4-40cd-9f63-55774da2740e-rootfs\") pod \"machine-config-daemon-mtdk2\" (UID: \"e5e0836f-c0b4-40cd-9f63-55774da2740e\") " pod="openshift-machine-config-operator/machine-config-daemon-mtdk2" Mar 18 09:57:59.547009 master-0 kubenswrapper[8244]: I0318 09:57:59.546984 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e5e0836f-c0b4-40cd-9f63-55774da2740e-mcd-auth-proxy-config\") pod \"machine-config-daemon-mtdk2\" (UID: \"e5e0836f-c0b4-40cd-9f63-55774da2740e\") " pod="openshift-machine-config-operator/machine-config-daemon-mtdk2" Mar 18 09:57:59.558154 master-0 kubenswrapper[8244]: I0318 09:57:59.558112 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e5e0836f-c0b4-40cd-9f63-55774da2740e-proxy-tls\") pod \"machine-config-daemon-mtdk2\" (UID: \"e5e0836f-c0b4-40cd-9f63-55774da2740e\") " 
pod="openshift-machine-config-operator/machine-config-daemon-mtdk2" Mar 18 09:57:59.560728 master-0 kubenswrapper[8244]: I0318 09:57:59.560667 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k94j4\" (UniqueName: \"kubernetes.io/projected/e5e0836f-c0b4-40cd-9f63-55774da2740e-kube-api-access-k94j4\") pod \"machine-config-daemon-mtdk2\" (UID: \"e5e0836f-c0b4-40cd-9f63-55774da2740e\") " pod="openshift-machine-config-operator/machine-config-daemon-mtdk2" Mar 18 09:57:59.638189 master-0 kubenswrapper[8244]: I0318 09:57:59.637003 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mtdk2" Mar 18 09:58:11.425167 master-0 kubenswrapper[8244]: I0318 09:58:11.425099 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6cb57bb5db-wk759"] Mar 18 09:58:13.609365 master-0 kubenswrapper[8244]: I0318 09:58:13.609230 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-tntvw"] Mar 18 09:58:17.731509 master-0 kubenswrapper[8244]: W0318 09:58:17.731415 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5e0836f_c0b4_40cd_9f63_55774da2740e.slice/crio-e277fb0b84dd045eb44f5a8337ca7f75f6577ad5f14ee5eacb1c176f0cf83dfa WatchSource:0}: Error finding container e277fb0b84dd045eb44f5a8337ca7f75f6577ad5f14ee5eacb1c176f0cf83dfa: Status 404 returned error can't find the container with id e277fb0b84dd045eb44f5a8337ca7f75f6577ad5f14ee5eacb1c176f0cf83dfa Mar 18 09:58:17.915483 master-0 kubenswrapper[8244]: I0318 09:58:17.915444 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mtdk2" 
event={"ID":"e5e0836f-c0b4-40cd-9f63-55774da2740e","Type":"ContainerStarted","Data":"e277fb0b84dd045eb44f5a8337ca7f75f6577ad5f14ee5eacb1c176f0cf83dfa"} Mar 18 09:58:18.922494 master-0 kubenswrapper[8244]: I0318 09:58:18.922422 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-4kr54" event={"ID":"29490aed-9c97-42d1-94c8-44d1de13b70c","Type":"ContainerStarted","Data":"7dacdb62f1945b9bcbdc5ee51170fb7ad65d9a415432a7a5c1a8a53dc9179ca2"} Mar 18 09:58:18.924049 master-0 kubenswrapper[8244]: I0318 09:58:18.924011 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-5dg2r" event={"ID":"ce65f61f-8e3a-47d5-ac12-ad4ab05d2850","Type":"ContainerStarted","Data":"ece674d79d5fd6afc063230f8e65cab73059c315a4619ada7661e8e8ae4d01ca"} Mar 18 09:58:18.924113 master-0 kubenswrapper[8244]: I0318 09:58:18.924054 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-5dg2r" event={"ID":"ce65f61f-8e3a-47d5-ac12-ad4ab05d2850","Type":"ContainerStarted","Data":"09ea98c7905e4dec3ae9833b94fbb167f42862f729b2772ab6c15bea8d7add2e"} Mar 18 09:58:18.926457 master-0 kubenswrapper[8244]: I0318 09:58:18.926424 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l" event={"ID":"1084562a-20a0-432d-b739-90bc0a4daff2","Type":"ContainerStarted","Data":"87003996a5718c5bc6e95603e8eded3d44da0056385b03f21c2c9944416268da"} Mar 18 09:58:18.926535 master-0 kubenswrapper[8244]: I0318 09:58:18.926458 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l" event={"ID":"1084562a-20a0-432d-b739-90bc0a4daff2","Type":"ContainerStarted","Data":"1ecb36ab1ea5528a80738edf9a38359cd4af84dcf07cd0edebf601529c05c59e"} Mar 18 09:58:18.928075 master-0 
kubenswrapper[8244]: I0318 09:58:18.928048 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zcm5j" event={"ID":"f88c2a18-11f5-45ef-aff1-3c5976716d85","Type":"ContainerStarted","Data":"d77d62684d3696a69a4baad8521b7beec7ec234f5d636741ff18bfd6906b5683"} Mar 18 09:58:18.929782 master-0 kubenswrapper[8244]: I0318 09:58:18.929762 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-wk759" event={"ID":"22c13008-d600-417e-9df1-96f3f579a11f","Type":"ContainerStarted","Data":"6941d2dc3af8a6f0a7a42402a8b6a8cb7e3939aea2a08fc60751b9b1690867c8"} Mar 18 09:58:18.929905 master-0 kubenswrapper[8244]: I0318 09:58:18.929878 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-wk759" podUID="22c13008-d600-417e-9df1-96f3f579a11f" containerName="kube-rbac-proxy" containerID="cri-o://bfb077b2983815767698c9710bbdf0908e95e510a0d18533f2ac399dba72b53b" gracePeriod=30 Mar 18 09:58:18.930119 master-0 kubenswrapper[8244]: I0318 09:58:18.930099 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-wk759" podUID="22c13008-d600-417e-9df1-96f3f579a11f" containerName="machine-approver-controller" containerID="cri-o://6941d2dc3af8a6f0a7a42402a8b6a8cb7e3939aea2a08fc60751b9b1690867c8" gracePeriod=30 Mar 18 09:58:18.932686 master-0 kubenswrapper[8244]: I0318 09:58:18.932654 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-xnvn9" event={"ID":"29fbc78b-1887-40d4-8165-f0f7cc40b583","Type":"ContainerStarted","Data":"8bc81d8dfdc71ea2b5b45a9af5008e6292938bf340e41102f31bdd98b3d93eaa"} Mar 18 09:58:18.936626 master-0 kubenswrapper[8244]: I0318 09:58:18.936587 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-mw9tt" event={"ID":"9f5c64aa-676e-4e48-b714-02f6edb1d361","Type":"ContainerStarted","Data":"6655987065a30c5bbf651bf96600d36185c30b2a671ea89757e4e505e5002a5d"} Mar 18 09:58:18.939924 master-0 kubenswrapper[8244]: I0318 09:58:18.939873 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-rtnkl" event={"ID":"caec44dc-aab7-4407-b34a-52bbe4b4f635","Type":"ContainerStarted","Data":"f2cddb12c3c75d46ba0029002456d542ded1c084b001bdc018f6a7391d1a9766"} Mar 18 09:58:18.942570 master-0 kubenswrapper[8244]: I0318 09:58:18.942518 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mtdk2" event={"ID":"e5e0836f-c0b4-40cd-9f63-55774da2740e","Type":"ContainerStarted","Data":"0d748ddd9f34930cec38b7370ab43f69f22a6481e37d065e4eff74d697a94db8"} Mar 18 09:58:18.942570 master-0 kubenswrapper[8244]: I0318 09:58:18.942568 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mtdk2" event={"ID":"e5e0836f-c0b4-40cd-9f63-55774da2740e","Type":"ContainerStarted","Data":"f3d8ae5941fba8df4dc2881128a3ad994f020b41664a48c350d437efcddf7768"} Mar 18 09:58:18.945100 master-0 kubenswrapper[8244]: I0318 09:58:18.945074 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-tntvw" event={"ID":"e9eb27ff-f89f-4c0e-abac-9fdfd8cee887","Type":"ContainerStarted","Data":"1bb8026ca0b78003e696204c7a2846d20107df20ede54f627ad655b22c271d61"} Mar 18 09:58:18.945179 master-0 kubenswrapper[8244]: I0318 09:58:18.945103 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-tntvw" 
event={"ID":"e9eb27ff-f89f-4c0e-abac-9fdfd8cee887","Type":"ContainerStarted","Data":"e5e01e7b8d76a2412b274d7ca8d554c2e9afdf8f5f8ae4d5d3d77abaa0487402"} Mar 18 09:58:18.945179 master-0 kubenswrapper[8244]: I0318 09:58:18.945117 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-tntvw" event={"ID":"e9eb27ff-f89f-4c0e-abac-9fdfd8cee887","Type":"ContainerStarted","Data":"83feb8ecbad2c2da02a0e90f1f2c712f11c7dcf9362a577ea04d30ee9235ed04"} Mar 18 09:58:18.945265 master-0 kubenswrapper[8244]: I0318 09:58:18.945217 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-tntvw" podUID="e9eb27ff-f89f-4c0e-abac-9fdfd8cee887" containerName="cluster-cloud-controller-manager" containerID="cri-o://83feb8ecbad2c2da02a0e90f1f2c712f11c7dcf9362a577ea04d30ee9235ed04" gracePeriod=30 Mar 18 09:58:18.945478 master-0 kubenswrapper[8244]: I0318 09:58:18.945446 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-tntvw" podUID="e9eb27ff-f89f-4c0e-abac-9fdfd8cee887" containerName="kube-rbac-proxy" containerID="cri-o://1bb8026ca0b78003e696204c7a2846d20107df20ede54f627ad655b22c271d61" gracePeriod=30 Mar 18 09:58:18.945530 master-0 kubenswrapper[8244]: I0318 09:58:18.945504 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-tntvw" podUID="e9eb27ff-f89f-4c0e-abac-9fdfd8cee887" containerName="config-sync-controllers" containerID="cri-o://e5e01e7b8d76a2412b274d7ca8d554c2e9afdf8f5f8ae4d5d3d77abaa0487402" gracePeriod=30 Mar 18 09:58:18.948529 master-0 kubenswrapper[8244]: I0318 09:58:18.948490 8244 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-bdcw7" event={"ID":"71755097-7543-48f8-8925-0e21650bf8f6","Type":"ContainerStarted","Data":"220ff8430972cd71ac3e3a30eb6620b129ab7e67e7e9c9f83b73380cc41bc1ca"} Mar 18 09:58:18.959569 master-0 kubenswrapper[8244]: I0318 09:58:18.959507 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-4kr54" podStartSLOduration=2.46504432 podStartE2EDuration="26.95949423s" podCreationTimestamp="2026-03-18 09:57:52 +0000 UTC" firstStartedPulling="2026-03-18 09:57:53.166009845 +0000 UTC m=+189.645745963" lastFinishedPulling="2026-03-18 09:58:17.660459745 +0000 UTC m=+214.140195873" observedRunningTime="2026-03-18 09:58:18.959132431 +0000 UTC m=+215.438868569" watchObservedRunningTime="2026-03-18 09:58:18.95949423 +0000 UTC m=+215.439230358" Mar 18 09:58:19.028716 master-0 kubenswrapper[8244]: I0318 09:58:19.028208 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-mtdk2" podStartSLOduration=20.028164357 podStartE2EDuration="20.028164357s" podCreationTimestamp="2026-03-18 09:57:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:58:19.026021825 +0000 UTC m=+215.505757953" watchObservedRunningTime="2026-03-18 09:58:19.028164357 +0000 UTC m=+215.507900485" Mar 18 09:58:19.061912 master-0 kubenswrapper[8244]: I0318 09:58:19.061859 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-68bf6ff9d6-bdcw7" podStartSLOduration=2.95908355 podStartE2EDuration="27.06181809s" podCreationTimestamp="2026-03-18 09:57:52 +0000 UTC" firstStartedPulling="2026-03-18 09:57:53.628384391 +0000 UTC m=+190.108120519" lastFinishedPulling="2026-03-18 09:58:17.731118921 +0000 UTC m=+214.210855059" 
observedRunningTime="2026-03-18 09:58:19.061662096 +0000 UTC m=+215.541398224" watchObservedRunningTime="2026-03-18 09:58:19.06181809 +0000 UTC m=+215.541554218" Mar 18 09:58:19.092408 master-0 kubenswrapper[8244]: I0318 09:58:19.092339 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-tntvw" podStartSLOduration=2.779010101 podStartE2EDuration="28.092321835s" podCreationTimestamp="2026-03-18 09:57:51 +0000 UTC" firstStartedPulling="2026-03-18 09:57:52.384438162 +0000 UTC m=+188.864174280" lastFinishedPulling="2026-03-18 09:58:17.697749886 +0000 UTC m=+214.177486014" observedRunningTime="2026-03-18 09:58:19.09008293 +0000 UTC m=+215.569819058" watchObservedRunningTime="2026-03-18 09:58:19.092321835 +0000 UTC m=+215.572057953" Mar 18 09:58:19.141891 master-0 kubenswrapper[8244]: I0318 09:58:19.140895 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-wk759" podStartSLOduration=18.543053239 podStartE2EDuration="28.140874101s" podCreationTimestamp="2026-03-18 09:57:51 +0000 UTC" firstStartedPulling="2026-03-18 09:57:53.268867308 +0000 UTC m=+189.748603436" lastFinishedPulling="2026-03-18 09:58:02.86668817 +0000 UTC m=+199.346424298" observedRunningTime="2026-03-18 09:58:19.140277877 +0000 UTC m=+215.620014015" watchObservedRunningTime="2026-03-18 09:58:19.140874101 +0000 UTC m=+215.620610239" Mar 18 09:58:19.143230 master-0 kubenswrapper[8244]: I0318 09:58:19.143145 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-rtnkl" podStartSLOduration=3.479836772 podStartE2EDuration="28.143119586s" podCreationTimestamp="2026-03-18 09:57:51 +0000 UTC" firstStartedPulling="2026-03-18 09:57:53.107627129 +0000 UTC m=+189.587363257" lastFinishedPulling="2026-03-18 
09:58:17.770909943 +0000 UTC m=+214.250646071" observedRunningTime="2026-03-18 09:58:19.115953302 +0000 UTC m=+215.595689430" watchObservedRunningTime="2026-03-18 09:58:19.143119586 +0000 UTC m=+215.622855804" Mar 18 09:58:19.151107 master-0 kubenswrapper[8244]: I0318 09:58:19.151066 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-wk759" Mar 18 09:58:19.155397 master-0 kubenswrapper[8244]: I0318 09:58:19.155327 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zcm5j" podStartSLOduration=17.362380196 podStartE2EDuration="27.155312184s" podCreationTimestamp="2026-03-18 09:57:52 +0000 UTC" firstStartedPulling="2026-03-18 09:57:53.073753872 +0000 UTC m=+189.553490000" lastFinishedPulling="2026-03-18 09:58:02.86668586 +0000 UTC m=+199.346421988" observedRunningTime="2026-03-18 09:58:19.155079848 +0000 UTC m=+215.634815976" watchObservedRunningTime="2026-03-18 09:58:19.155312184 +0000 UTC m=+215.635048322" Mar 18 09:58:19.162242 master-0 kubenswrapper[8244]: I0318 09:58:19.162212 8244 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-tntvw" Mar 18 09:58:19.178129 master-0 kubenswrapper[8244]: I0318 09:58:19.178039 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-xnvn9" podStartSLOduration=3.057927995 podStartE2EDuration="27.178015019s" podCreationTimestamp="2026-03-18 09:57:52 +0000 UTC" firstStartedPulling="2026-03-18 09:57:53.649759713 +0000 UTC m=+190.129495841" lastFinishedPulling="2026-03-18 09:58:17.769846737 +0000 UTC m=+214.249582865" observedRunningTime="2026-03-18 09:58:19.173844277 +0000 UTC m=+215.653580405" watchObservedRunningTime="2026-03-18 09:58:19.178015019 +0000 UTC m=+215.657751147" Mar 18 09:58:19.216099 master-0 kubenswrapper[8244]: I0318 09:58:19.215959 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l" podStartSLOduration=2.99100269 podStartE2EDuration="27.215941855s" podCreationTimestamp="2026-03-18 09:57:52 +0000 UTC" firstStartedPulling="2026-03-18 09:57:53.496603512 +0000 UTC m=+189.976339640" lastFinishedPulling="2026-03-18 09:58:17.721542677 +0000 UTC m=+214.201278805" observedRunningTime="2026-03-18 09:58:19.205672964 +0000 UTC m=+215.685409092" watchObservedRunningTime="2026-03-18 09:58:19.215941855 +0000 UTC m=+215.695677983" Mar 18 09:58:19.243703 master-0 kubenswrapper[8244]: I0318 09:58:19.243613 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-mw9tt" podStartSLOduration=3.152661649 podStartE2EDuration="27.243593681s" podCreationTimestamp="2026-03-18 09:57:52 +0000 UTC" firstStartedPulling="2026-03-18 09:57:53.630684187 +0000 UTC m=+190.110420315" lastFinishedPulling="2026-03-18 09:58:17.721616179 +0000 UTC m=+214.201352347" observedRunningTime="2026-03-18 
09:58:19.241282274 +0000 UTC m=+215.721018422" watchObservedRunningTime="2026-03-18 09:58:19.243593681 +0000 UTC m=+215.723329829" Mar 18 09:58:19.257317 master-0 kubenswrapper[8244]: I0318 09:58:19.257249 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e9eb27ff-f89f-4c0e-abac-9fdfd8cee887-auth-proxy-config\") pod \"e9eb27ff-f89f-4c0e-abac-9fdfd8cee887\" (UID: \"e9eb27ff-f89f-4c0e-abac-9fdfd8cee887\") " Mar 18 09:58:19.257317 master-0 kubenswrapper[8244]: I0318 09:58:19.257331 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c13008-d600-417e-9df1-96f3f579a11f-config\") pod \"22c13008-d600-417e-9df1-96f3f579a11f\" (UID: \"22c13008-d600-417e-9df1-96f3f579a11f\") " Mar 18 09:58:19.257682 master-0 kubenswrapper[8244]: I0318 09:58:19.257350 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c13008-d600-417e-9df1-96f3f579a11f-auth-proxy-config\") pod \"22c13008-d600-417e-9df1-96f3f579a11f\" (UID: \"22c13008-d600-417e-9df1-96f3f579a11f\") " Mar 18 09:58:19.257682 master-0 kubenswrapper[8244]: I0318 09:58:19.257395 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9eb27ff-f89f-4c0e-abac-9fdfd8cee887-images\") pod \"e9eb27ff-f89f-4c0e-abac-9fdfd8cee887\" (UID: \"e9eb27ff-f89f-4c0e-abac-9fdfd8cee887\") " Mar 18 09:58:19.257682 master-0 kubenswrapper[8244]: I0318 09:58:19.257419 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdjwg\" (UniqueName: \"kubernetes.io/projected/22c13008-d600-417e-9df1-96f3f579a11f-kube-api-access-sdjwg\") pod \"22c13008-d600-417e-9df1-96f3f579a11f\" (UID: \"22c13008-d600-417e-9df1-96f3f579a11f\") " Mar 18 09:58:19.257682 master-0 
kubenswrapper[8244]: I0318 09:58:19.257437 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9eb27ff-f89f-4c0e-abac-9fdfd8cee887-cloud-controller-manager-operator-tls\") pod \"e9eb27ff-f89f-4c0e-abac-9fdfd8cee887\" (UID: \"e9eb27ff-f89f-4c0e-abac-9fdfd8cee887\") " Mar 18 09:58:19.257682 master-0 kubenswrapper[8244]: I0318 09:58:19.257456 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcsbr\" (UniqueName: \"kubernetes.io/projected/e9eb27ff-f89f-4c0e-abac-9fdfd8cee887-kube-api-access-kcsbr\") pod \"e9eb27ff-f89f-4c0e-abac-9fdfd8cee887\" (UID: \"e9eb27ff-f89f-4c0e-abac-9fdfd8cee887\") " Mar 18 09:58:19.257682 master-0 kubenswrapper[8244]: I0318 09:58:19.257477 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c13008-d600-417e-9df1-96f3f579a11f-machine-approver-tls\") pod \"22c13008-d600-417e-9df1-96f3f579a11f\" (UID: \"22c13008-d600-417e-9df1-96f3f579a11f\") " Mar 18 09:58:19.257682 master-0 kubenswrapper[8244]: I0318 09:58:19.257496 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9eb27ff-f89f-4c0e-abac-9fdfd8cee887-host-etc-kube\") pod \"e9eb27ff-f89f-4c0e-abac-9fdfd8cee887\" (UID: \"e9eb27ff-f89f-4c0e-abac-9fdfd8cee887\") " Mar 18 09:58:19.258111 master-0 kubenswrapper[8244]: I0318 09:58:19.257736 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9eb27ff-f89f-4c0e-abac-9fdfd8cee887-host-etc-kube" (OuterVolumeSpecName: "host-etc-kube") pod "e9eb27ff-f89f-4c0e-abac-9fdfd8cee887" (UID: "e9eb27ff-f89f-4c0e-abac-9fdfd8cee887"). InnerVolumeSpecName "host-etc-kube". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:58:19.259901 master-0 kubenswrapper[8244]: I0318 09:58:19.258680 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9eb27ff-f89f-4c0e-abac-9fdfd8cee887-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "e9eb27ff-f89f-4c0e-abac-9fdfd8cee887" (UID: "e9eb27ff-f89f-4c0e-abac-9fdfd8cee887"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:58:19.259901 master-0 kubenswrapper[8244]: I0318 09:58:19.259150 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c13008-d600-417e-9df1-96f3f579a11f-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c13008-d600-417e-9df1-96f3f579a11f" (UID: "22c13008-d600-417e-9df1-96f3f579a11f"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:58:19.259901 master-0 kubenswrapper[8244]: I0318 09:58:19.259389 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9eb27ff-f89f-4c0e-abac-9fdfd8cee887-images" (OuterVolumeSpecName: "images") pod "e9eb27ff-f89f-4c0e-abac-9fdfd8cee887" (UID: "e9eb27ff-f89f-4c0e-abac-9fdfd8cee887"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:58:19.259901 master-0 kubenswrapper[8244]: I0318 09:58:19.259800 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c13008-d600-417e-9df1-96f3f579a11f-config" (OuterVolumeSpecName: "config") pod "22c13008-d600-417e-9df1-96f3f579a11f" (UID: "22c13008-d600-417e-9df1-96f3f579a11f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:58:19.273492 master-0 kubenswrapper[8244]: I0318 09:58:19.272976 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9eb27ff-f89f-4c0e-abac-9fdfd8cee887-cloud-controller-manager-operator-tls" (OuterVolumeSpecName: "cloud-controller-manager-operator-tls") pod "e9eb27ff-f89f-4c0e-abac-9fdfd8cee887" (UID: "e9eb27ff-f89f-4c0e-abac-9fdfd8cee887"). InnerVolumeSpecName "cloud-controller-manager-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:58:19.273492 master-0 kubenswrapper[8244]: I0318 09:58:19.273102 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c13008-d600-417e-9df1-96f3f579a11f-kube-api-access-sdjwg" (OuterVolumeSpecName: "kube-api-access-sdjwg") pod "22c13008-d600-417e-9df1-96f3f579a11f" (UID: "22c13008-d600-417e-9df1-96f3f579a11f"). InnerVolumeSpecName "kube-api-access-sdjwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:58:19.273492 master-0 kubenswrapper[8244]: I0318 09:58:19.273237 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9eb27ff-f89f-4c0e-abac-9fdfd8cee887-kube-api-access-kcsbr" (OuterVolumeSpecName: "kube-api-access-kcsbr") pod "e9eb27ff-f89f-4c0e-abac-9fdfd8cee887" (UID: "e9eb27ff-f89f-4c0e-abac-9fdfd8cee887"). InnerVolumeSpecName "kube-api-access-kcsbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:58:19.276157 master-0 kubenswrapper[8244]: I0318 09:58:19.276092 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c13008-d600-417e-9df1-96f3f579a11f-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c13008-d600-417e-9df1-96f3f579a11f" (UID: "22c13008-d600-417e-9df1-96f3f579a11f"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:58:19.278961 master-0 kubenswrapper[8244]: I0318 09:58:19.278900 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-5dg2r" podStartSLOduration=4.272912667 podStartE2EDuration="28.278886233s" podCreationTimestamp="2026-03-18 09:57:51 +0000 UTC" firstStartedPulling="2026-03-18 09:57:53.696530826 +0000 UTC m=+190.176266954" lastFinishedPulling="2026-03-18 09:58:17.702504392 +0000 UTC m=+214.182240520" observedRunningTime="2026-03-18 09:58:19.276277119 +0000 UTC m=+215.756013247" watchObservedRunningTime="2026-03-18 09:58:19.278886233 +0000 UTC m=+215.758622361" Mar 18 09:58:19.358778 master-0 kubenswrapper[8244]: I0318 09:58:19.358711 8244 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c13008-d600-417e-9df1-96f3f579a11f-config\") on node \"master-0\" DevicePath \"\"" Mar 18 09:58:19.358778 master-0 kubenswrapper[8244]: I0318 09:58:19.358756 8244 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c13008-d600-417e-9df1-96f3f579a11f-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Mar 18 09:58:19.358778 master-0 kubenswrapper[8244]: I0318 09:58:19.358767 8244 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9eb27ff-f89f-4c0e-abac-9fdfd8cee887-images\") on node \"master-0\" DevicePath \"\"" Mar 18 09:58:19.358778 master-0 kubenswrapper[8244]: I0318 09:58:19.358777 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdjwg\" (UniqueName: \"kubernetes.io/projected/22c13008-d600-417e-9df1-96f3f579a11f-kube-api-access-sdjwg\") on node \"master-0\" DevicePath \"\"" Mar 18 09:58:19.358778 master-0 kubenswrapper[8244]: I0318 09:58:19.358789 8244 reconciler_common.go:293] "Volume detached for volume 
\"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9eb27ff-f89f-4c0e-abac-9fdfd8cee887-cloud-controller-manager-operator-tls\") on node \"master-0\" DevicePath \"\"" Mar 18 09:58:19.358778 master-0 kubenswrapper[8244]: I0318 09:58:19.358799 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcsbr\" (UniqueName: \"kubernetes.io/projected/e9eb27ff-f89f-4c0e-abac-9fdfd8cee887-kube-api-access-kcsbr\") on node \"master-0\" DevicePath \"\"" Mar 18 09:58:19.359244 master-0 kubenswrapper[8244]: I0318 09:58:19.358808 8244 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c13008-d600-417e-9df1-96f3f579a11f-machine-approver-tls\") on node \"master-0\" DevicePath \"\"" Mar 18 09:58:19.359244 master-0 kubenswrapper[8244]: I0318 09:58:19.358822 8244 reconciler_common.go:293] "Volume detached for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9eb27ff-f89f-4c0e-abac-9fdfd8cee887-host-etc-kube\") on node \"master-0\" DevicePath \"\"" Mar 18 09:58:19.359244 master-0 kubenswrapper[8244]: I0318 09:58:19.358849 8244 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e9eb27ff-f89f-4c0e-abac-9fdfd8cee887-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Mar 18 09:58:19.957040 master-0 kubenswrapper[8244]: I0318 09:58:19.956973 8244 generic.go:334] "Generic (PLEG): container finished" podID="e9eb27ff-f89f-4c0e-abac-9fdfd8cee887" containerID="1bb8026ca0b78003e696204c7a2846d20107df20ede54f627ad655b22c271d61" exitCode=0 Mar 18 09:58:19.957701 master-0 kubenswrapper[8244]: I0318 09:58:19.957679 8244 generic.go:334] "Generic (PLEG): container finished" podID="e9eb27ff-f89f-4c0e-abac-9fdfd8cee887" containerID="e5e01e7b8d76a2412b274d7ca8d554c2e9afdf8f5f8ae4d5d3d77abaa0487402" exitCode=0 Mar 18 09:58:19.957853 master-0 kubenswrapper[8244]: I0318 09:58:19.957807 8244 generic.go:334] 
"Generic (PLEG): container finished" podID="e9eb27ff-f89f-4c0e-abac-9fdfd8cee887" containerID="83feb8ecbad2c2da02a0e90f1f2c712f11c7dcf9362a577ea04d30ee9235ed04" exitCode=0 Mar 18 09:58:19.958022 master-0 kubenswrapper[8244]: I0318 09:58:19.957033 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-tntvw" Mar 18 09:58:19.958115 master-0 kubenswrapper[8244]: I0318 09:58:19.957051 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-tntvw" event={"ID":"e9eb27ff-f89f-4c0e-abac-9fdfd8cee887","Type":"ContainerDied","Data":"1bb8026ca0b78003e696204c7a2846d20107df20ede54f627ad655b22c271d61"} Mar 18 09:58:19.958184 master-0 kubenswrapper[8244]: I0318 09:58:19.958132 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-tntvw" event={"ID":"e9eb27ff-f89f-4c0e-abac-9fdfd8cee887","Type":"ContainerDied","Data":"e5e01e7b8d76a2412b274d7ca8d554c2e9afdf8f5f8ae4d5d3d77abaa0487402"} Mar 18 09:58:19.958184 master-0 kubenswrapper[8244]: I0318 09:58:19.958150 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-tntvw" event={"ID":"e9eb27ff-f89f-4c0e-abac-9fdfd8cee887","Type":"ContainerDied","Data":"83feb8ecbad2c2da02a0e90f1f2c712f11c7dcf9362a577ea04d30ee9235ed04"} Mar 18 09:58:19.958184 master-0 kubenswrapper[8244]: I0318 09:58:19.958163 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-tntvw" event={"ID":"e9eb27ff-f89f-4c0e-abac-9fdfd8cee887","Type":"ContainerDied","Data":"e4e54b7e57036564c391f3dbe2a0d0cddde83c0e5f2501af4faca38ba51ff057"} Mar 18 
09:58:19.958184 master-0 kubenswrapper[8244]: I0318 09:58:19.958183 8244 scope.go:117] "RemoveContainer" containerID="1bb8026ca0b78003e696204c7a2846d20107df20ede54f627ad655b22c271d61" Mar 18 09:58:19.962505 master-0 kubenswrapper[8244]: I0318 09:58:19.961708 8244 generic.go:334] "Generic (PLEG): container finished" podID="22c13008-d600-417e-9df1-96f3f579a11f" containerID="6941d2dc3af8a6f0a7a42402a8b6a8cb7e3939aea2a08fc60751b9b1690867c8" exitCode=0 Mar 18 09:58:19.962505 master-0 kubenswrapper[8244]: I0318 09:58:19.961753 8244 generic.go:334] "Generic (PLEG): container finished" podID="22c13008-d600-417e-9df1-96f3f579a11f" containerID="bfb077b2983815767698c9710bbdf0908e95e510a0d18533f2ac399dba72b53b" exitCode=0 Mar 18 09:58:19.962505 master-0 kubenswrapper[8244]: I0318 09:58:19.961804 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-wk759" Mar 18 09:58:19.962505 master-0 kubenswrapper[8244]: I0318 09:58:19.961816 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-wk759" event={"ID":"22c13008-d600-417e-9df1-96f3f579a11f","Type":"ContainerDied","Data":"6941d2dc3af8a6f0a7a42402a8b6a8cb7e3939aea2a08fc60751b9b1690867c8"} Mar 18 09:58:19.962505 master-0 kubenswrapper[8244]: I0318 09:58:19.961930 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-wk759" event={"ID":"22c13008-d600-417e-9df1-96f3f579a11f","Type":"ContainerDied","Data":"bfb077b2983815767698c9710bbdf0908e95e510a0d18533f2ac399dba72b53b"} Mar 18 09:58:19.962505 master-0 kubenswrapper[8244]: I0318 09:58:19.961963 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-wk759" 
event={"ID":"22c13008-d600-417e-9df1-96f3f579a11f","Type":"ContainerDied","Data":"2149c630333ae9ebbeba145d1b4c7914481957cb46004d4b6849e674c4e85846"} Mar 18 09:58:19.992208 master-0 kubenswrapper[8244]: I0318 09:58:19.992166 8244 scope.go:117] "RemoveContainer" containerID="e5e01e7b8d76a2412b274d7ca8d554c2e9afdf8f5f8ae4d5d3d77abaa0487402" Mar 18 09:58:20.008887 master-0 kubenswrapper[8244]: I0318 09:58:20.007991 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-tntvw"] Mar 18 09:58:20.011444 master-0 kubenswrapper[8244]: I0318 09:58:20.010992 8244 scope.go:117] "RemoveContainer" containerID="83feb8ecbad2c2da02a0e90f1f2c712f11c7dcf9362a577ea04d30ee9235ed04" Mar 18 09:58:20.012958 master-0 kubenswrapper[8244]: I0318 09:58:20.012909 8244 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-tntvw"] Mar 18 09:58:20.022972 master-0 kubenswrapper[8244]: I0318 09:58:20.022926 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6cb57bb5db-wk759"] Mar 18 09:58:20.037858 master-0 kubenswrapper[8244]: I0318 09:58:20.037574 8244 scope.go:117] "RemoveContainer" containerID="1bb8026ca0b78003e696204c7a2846d20107df20ede54f627ad655b22c271d61" Mar 18 09:58:20.039713 master-0 kubenswrapper[8244]: E0318 09:58:20.039623 8244 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bb8026ca0b78003e696204c7a2846d20107df20ede54f627ad655b22c271d61\": container with ID starting with 1bb8026ca0b78003e696204c7a2846d20107df20ede54f627ad655b22c271d61 not found: ID does not exist" containerID="1bb8026ca0b78003e696204c7a2846d20107df20ede54f627ad655b22c271d61" Mar 18 09:58:20.039713 master-0 kubenswrapper[8244]: I0318 09:58:20.039677 8244 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bb8026ca0b78003e696204c7a2846d20107df20ede54f627ad655b22c271d61"} err="failed to get container status \"1bb8026ca0b78003e696204c7a2846d20107df20ede54f627ad655b22c271d61\": rpc error: code = NotFound desc = could not find container \"1bb8026ca0b78003e696204c7a2846d20107df20ede54f627ad655b22c271d61\": container with ID starting with 1bb8026ca0b78003e696204c7a2846d20107df20ede54f627ad655b22c271d61 not found: ID does not exist" Mar 18 09:58:20.039713 master-0 kubenswrapper[8244]: I0318 09:58:20.039711 8244 scope.go:117] "RemoveContainer" containerID="e5e01e7b8d76a2412b274d7ca8d554c2e9afdf8f5f8ae4d5d3d77abaa0487402" Mar 18 09:58:20.040979 master-0 kubenswrapper[8244]: E0318 09:58:20.040944 8244 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5e01e7b8d76a2412b274d7ca8d554c2e9afdf8f5f8ae4d5d3d77abaa0487402\": container with ID starting with e5e01e7b8d76a2412b274d7ca8d554c2e9afdf8f5f8ae4d5d3d77abaa0487402 not found: ID does not exist" containerID="e5e01e7b8d76a2412b274d7ca8d554c2e9afdf8f5f8ae4d5d3d77abaa0487402" Mar 18 09:58:20.041043 master-0 kubenswrapper[8244]: I0318 09:58:20.041001 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5e01e7b8d76a2412b274d7ca8d554c2e9afdf8f5f8ae4d5d3d77abaa0487402"} err="failed to get container status \"e5e01e7b8d76a2412b274d7ca8d554c2e9afdf8f5f8ae4d5d3d77abaa0487402\": rpc error: code = NotFound desc = could not find container \"e5e01e7b8d76a2412b274d7ca8d554c2e9afdf8f5f8ae4d5d3d77abaa0487402\": container with ID starting with e5e01e7b8d76a2412b274d7ca8d554c2e9afdf8f5f8ae4d5d3d77abaa0487402 not found: ID does not exist" Mar 18 09:58:20.041082 master-0 kubenswrapper[8244]: I0318 09:58:20.041042 8244 scope.go:117] "RemoveContainer" containerID="83feb8ecbad2c2da02a0e90f1f2c712f11c7dcf9362a577ea04d30ee9235ed04" Mar 18 09:58:20.042463 master-0 
kubenswrapper[8244]: E0318 09:58:20.042079 8244 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83feb8ecbad2c2da02a0e90f1f2c712f11c7dcf9362a577ea04d30ee9235ed04\": container with ID starting with 83feb8ecbad2c2da02a0e90f1f2c712f11c7dcf9362a577ea04d30ee9235ed04 not found: ID does not exist" containerID="83feb8ecbad2c2da02a0e90f1f2c712f11c7dcf9362a577ea04d30ee9235ed04" Mar 18 09:58:20.042463 master-0 kubenswrapper[8244]: I0318 09:58:20.042165 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83feb8ecbad2c2da02a0e90f1f2c712f11c7dcf9362a577ea04d30ee9235ed04"} err="failed to get container status \"83feb8ecbad2c2da02a0e90f1f2c712f11c7dcf9362a577ea04d30ee9235ed04\": rpc error: code = NotFound desc = could not find container \"83feb8ecbad2c2da02a0e90f1f2c712f11c7dcf9362a577ea04d30ee9235ed04\": container with ID starting with 83feb8ecbad2c2da02a0e90f1f2c712f11c7dcf9362a577ea04d30ee9235ed04 not found: ID does not exist" Mar 18 09:58:20.042463 master-0 kubenswrapper[8244]: I0318 09:58:20.042208 8244 scope.go:117] "RemoveContainer" containerID="1bb8026ca0b78003e696204c7a2846d20107df20ede54f627ad655b22c271d61" Mar 18 09:58:20.042757 master-0 kubenswrapper[8244]: I0318 09:58:20.042668 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bb8026ca0b78003e696204c7a2846d20107df20ede54f627ad655b22c271d61"} err="failed to get container status \"1bb8026ca0b78003e696204c7a2846d20107df20ede54f627ad655b22c271d61\": rpc error: code = NotFound desc = could not find container \"1bb8026ca0b78003e696204c7a2846d20107df20ede54f627ad655b22c271d61\": container with ID starting with 1bb8026ca0b78003e696204c7a2846d20107df20ede54f627ad655b22c271d61 not found: ID does not exist" Mar 18 09:58:20.042757 master-0 kubenswrapper[8244]: I0318 09:58:20.042699 8244 scope.go:117] "RemoveContainer" 
containerID="e5e01e7b8d76a2412b274d7ca8d554c2e9afdf8f5f8ae4d5d3d77abaa0487402" Mar 18 09:58:20.048307 master-0 kubenswrapper[8244]: I0318 09:58:20.048220 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5e01e7b8d76a2412b274d7ca8d554c2e9afdf8f5f8ae4d5d3d77abaa0487402"} err="failed to get container status \"e5e01e7b8d76a2412b274d7ca8d554c2e9afdf8f5f8ae4d5d3d77abaa0487402\": rpc error: code = NotFound desc = could not find container \"e5e01e7b8d76a2412b274d7ca8d554c2e9afdf8f5f8ae4d5d3d77abaa0487402\": container with ID starting with e5e01e7b8d76a2412b274d7ca8d554c2e9afdf8f5f8ae4d5d3d77abaa0487402 not found: ID does not exist" Mar 18 09:58:20.048567 master-0 kubenswrapper[8244]: I0318 09:58:20.048325 8244 scope.go:117] "RemoveContainer" containerID="83feb8ecbad2c2da02a0e90f1f2c712f11c7dcf9362a577ea04d30ee9235ed04" Mar 18 09:58:20.051913 master-0 kubenswrapper[8244]: I0318 09:58:20.051867 8244 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6cb57bb5db-wk759"] Mar 18 09:58:20.060575 master-0 kubenswrapper[8244]: I0318 09:58:20.060493 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83feb8ecbad2c2da02a0e90f1f2c712f11c7dcf9362a577ea04d30ee9235ed04"} err="failed to get container status \"83feb8ecbad2c2da02a0e90f1f2c712f11c7dcf9362a577ea04d30ee9235ed04\": rpc error: code = NotFound desc = could not find container \"83feb8ecbad2c2da02a0e90f1f2c712f11c7dcf9362a577ea04d30ee9235ed04\": container with ID starting with 83feb8ecbad2c2da02a0e90f1f2c712f11c7dcf9362a577ea04d30ee9235ed04 not found: ID does not exist" Mar 18 09:58:20.060575 master-0 kubenswrapper[8244]: I0318 09:58:20.060558 8244 scope.go:117] "RemoveContainer" containerID="1bb8026ca0b78003e696204c7a2846d20107df20ede54f627ad655b22c271d61" Mar 18 09:58:20.063188 master-0 kubenswrapper[8244]: I0318 09:58:20.061218 8244 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bb8026ca0b78003e696204c7a2846d20107df20ede54f627ad655b22c271d61"} err="failed to get container status \"1bb8026ca0b78003e696204c7a2846d20107df20ede54f627ad655b22c271d61\": rpc error: code = NotFound desc = could not find container \"1bb8026ca0b78003e696204c7a2846d20107df20ede54f627ad655b22c271d61\": container with ID starting with 1bb8026ca0b78003e696204c7a2846d20107df20ede54f627ad655b22c271d61 not found: ID does not exist" Mar 18 09:58:20.063188 master-0 kubenswrapper[8244]: I0318 09:58:20.061269 8244 scope.go:117] "RemoveContainer" containerID="e5e01e7b8d76a2412b274d7ca8d554c2e9afdf8f5f8ae4d5d3d77abaa0487402" Mar 18 09:58:20.063188 master-0 kubenswrapper[8244]: I0318 09:58:20.061626 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5e01e7b8d76a2412b274d7ca8d554c2e9afdf8f5f8ae4d5d3d77abaa0487402"} err="failed to get container status \"e5e01e7b8d76a2412b274d7ca8d554c2e9afdf8f5f8ae4d5d3d77abaa0487402\": rpc error: code = NotFound desc = could not find container \"e5e01e7b8d76a2412b274d7ca8d554c2e9afdf8f5f8ae4d5d3d77abaa0487402\": container with ID starting with e5e01e7b8d76a2412b274d7ca8d554c2e9afdf8f5f8ae4d5d3d77abaa0487402 not found: ID does not exist" Mar 18 09:58:20.063188 master-0 kubenswrapper[8244]: I0318 09:58:20.061686 8244 scope.go:117] "RemoveContainer" containerID="83feb8ecbad2c2da02a0e90f1f2c712f11c7dcf9362a577ea04d30ee9235ed04" Mar 18 09:58:20.063188 master-0 kubenswrapper[8244]: I0318 09:58:20.061985 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83feb8ecbad2c2da02a0e90f1f2c712f11c7dcf9362a577ea04d30ee9235ed04"} err="failed to get container status \"83feb8ecbad2c2da02a0e90f1f2c712f11c7dcf9362a577ea04d30ee9235ed04\": rpc error: code = NotFound desc = could not find container \"83feb8ecbad2c2da02a0e90f1f2c712f11c7dcf9362a577ea04d30ee9235ed04\": container with ID starting with 
83feb8ecbad2c2da02a0e90f1f2c712f11c7dcf9362a577ea04d30ee9235ed04 not found: ID does not exist" Mar 18 09:58:20.063188 master-0 kubenswrapper[8244]: I0318 09:58:20.062005 8244 scope.go:117] "RemoveContainer" containerID="6941d2dc3af8a6f0a7a42402a8b6a8cb7e3939aea2a08fc60751b9b1690867c8" Mar 18 09:58:20.074464 master-0 kubenswrapper[8244]: I0318 09:58:20.074394 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4"] Mar 18 09:58:20.074748 master-0 kubenswrapper[8244]: E0318 09:58:20.074727 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9eb27ff-f89f-4c0e-abac-9fdfd8cee887" containerName="cluster-cloud-controller-manager" Mar 18 09:58:20.074788 master-0 kubenswrapper[8244]: I0318 09:58:20.074752 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9eb27ff-f89f-4c0e-abac-9fdfd8cee887" containerName="cluster-cloud-controller-manager" Mar 18 09:58:20.074840 master-0 kubenswrapper[8244]: E0318 09:58:20.074790 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22c13008-d600-417e-9df1-96f3f579a11f" containerName="kube-rbac-proxy" Mar 18 09:58:20.074840 master-0 kubenswrapper[8244]: I0318 09:58:20.074803 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="22c13008-d600-417e-9df1-96f3f579a11f" containerName="kube-rbac-proxy" Mar 18 09:58:20.074904 master-0 kubenswrapper[8244]: E0318 09:58:20.074849 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9eb27ff-f89f-4c0e-abac-9fdfd8cee887" containerName="kube-rbac-proxy" Mar 18 09:58:20.074904 master-0 kubenswrapper[8244]: I0318 09:58:20.074865 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9eb27ff-f89f-4c0e-abac-9fdfd8cee887" containerName="kube-rbac-proxy" Mar 18 09:58:20.074904 master-0 kubenswrapper[8244]: E0318 09:58:20.074885 8244 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="22c13008-d600-417e-9df1-96f3f579a11f" containerName="machine-approver-controller" Mar 18 09:58:20.074904 master-0 kubenswrapper[8244]: I0318 09:58:20.074897 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="22c13008-d600-417e-9df1-96f3f579a11f" containerName="machine-approver-controller" Mar 18 09:58:20.075053 master-0 kubenswrapper[8244]: E0318 09:58:20.074922 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9eb27ff-f89f-4c0e-abac-9fdfd8cee887" containerName="config-sync-controllers" Mar 18 09:58:20.075053 master-0 kubenswrapper[8244]: I0318 09:58:20.074936 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9eb27ff-f89f-4c0e-abac-9fdfd8cee887" containerName="config-sync-controllers" Mar 18 09:58:20.075141 master-0 kubenswrapper[8244]: I0318 09:58:20.075114 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="22c13008-d600-417e-9df1-96f3f579a11f" containerName="kube-rbac-proxy" Mar 18 09:58:20.075180 master-0 kubenswrapper[8244]: I0318 09:58:20.075168 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="22c13008-d600-417e-9df1-96f3f579a11f" containerName="machine-approver-controller" Mar 18 09:58:20.075222 master-0 kubenswrapper[8244]: I0318 09:58:20.075193 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9eb27ff-f89f-4c0e-abac-9fdfd8cee887" containerName="cluster-cloud-controller-manager" Mar 18 09:58:20.075222 master-0 kubenswrapper[8244]: I0318 09:58:20.075210 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9eb27ff-f89f-4c0e-abac-9fdfd8cee887" containerName="config-sync-controllers" Mar 18 09:58:20.075288 master-0 kubenswrapper[8244]: I0318 09:58:20.075225 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9eb27ff-f89f-4c0e-abac-9fdfd8cee887" containerName="kube-rbac-proxy" Mar 18 09:58:20.077157 master-0 kubenswrapper[8244]: I0318 09:58:20.076681 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4" Mar 18 09:58:20.077950 master-0 kubenswrapper[8244]: I0318 09:58:20.077906 8244 scope.go:117] "RemoveContainer" containerID="bfb077b2983815767698c9710bbdf0908e95e510a0d18533f2ac399dba72b53b" Mar 18 09:58:20.079727 master-0 kubenswrapper[8244]: I0318 09:58:20.079066 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-fr2b8" Mar 18 09:58:20.083052 master-0 kubenswrapper[8244]: I0318 09:58:20.082991 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 18 09:58:20.083126 master-0 kubenswrapper[8244]: I0318 09:58:20.083085 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 18 09:58:20.083173 master-0 kubenswrapper[8244]: I0318 09:58:20.083151 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 18 09:58:20.083627 master-0 kubenswrapper[8244]: I0318 09:58:20.083558 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 18 09:58:20.083792 master-0 kubenswrapper[8244]: I0318 09:58:20.083765 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 18 09:58:20.086728 master-0 kubenswrapper[8244]: I0318 09:58:20.086618 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-5c6485487f-95jvh"] Mar 18 09:58:20.087602 master-0 kubenswrapper[8244]: I0318 09:58:20.087547 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-95jvh" Mar 18 09:58:20.091500 master-0 kubenswrapper[8244]: I0318 09:58:20.089205 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-hvm64" Mar 18 09:58:20.091500 master-0 kubenswrapper[8244]: I0318 09:58:20.089463 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 18 09:58:20.091500 master-0 kubenswrapper[8244]: I0318 09:58:20.089532 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 18 09:58:20.091500 master-0 kubenswrapper[8244]: I0318 09:58:20.089772 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 18 09:58:20.091500 master-0 kubenswrapper[8244]: I0318 09:58:20.089791 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 18 09:58:20.091500 master-0 kubenswrapper[8244]: I0318 09:58:20.089965 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 18 09:58:20.108887 master-0 kubenswrapper[8244]: I0318 09:58:20.108803 8244 scope.go:117] "RemoveContainer" containerID="6941d2dc3af8a6f0a7a42402a8b6a8cb7e3939aea2a08fc60751b9b1690867c8" Mar 18 09:58:20.114400 master-0 kubenswrapper[8244]: E0318 09:58:20.112668 8244 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6941d2dc3af8a6f0a7a42402a8b6a8cb7e3939aea2a08fc60751b9b1690867c8\": container with ID starting with 6941d2dc3af8a6f0a7a42402a8b6a8cb7e3939aea2a08fc60751b9b1690867c8 not found: ID does not exist" 
containerID="6941d2dc3af8a6f0a7a42402a8b6a8cb7e3939aea2a08fc60751b9b1690867c8" Mar 18 09:58:20.114400 master-0 kubenswrapper[8244]: I0318 09:58:20.112725 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6941d2dc3af8a6f0a7a42402a8b6a8cb7e3939aea2a08fc60751b9b1690867c8"} err="failed to get container status \"6941d2dc3af8a6f0a7a42402a8b6a8cb7e3939aea2a08fc60751b9b1690867c8\": rpc error: code = NotFound desc = could not find container \"6941d2dc3af8a6f0a7a42402a8b6a8cb7e3939aea2a08fc60751b9b1690867c8\": container with ID starting with 6941d2dc3af8a6f0a7a42402a8b6a8cb7e3939aea2a08fc60751b9b1690867c8 not found: ID does not exist" Mar 18 09:58:20.114400 master-0 kubenswrapper[8244]: I0318 09:58:20.112762 8244 scope.go:117] "RemoveContainer" containerID="bfb077b2983815767698c9710bbdf0908e95e510a0d18533f2ac399dba72b53b" Mar 18 09:58:20.114400 master-0 kubenswrapper[8244]: E0318 09:58:20.113178 8244 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfb077b2983815767698c9710bbdf0908e95e510a0d18533f2ac399dba72b53b\": container with ID starting with bfb077b2983815767698c9710bbdf0908e95e510a0d18533f2ac399dba72b53b not found: ID does not exist" containerID="bfb077b2983815767698c9710bbdf0908e95e510a0d18533f2ac399dba72b53b" Mar 18 09:58:20.114400 master-0 kubenswrapper[8244]: I0318 09:58:20.113219 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfb077b2983815767698c9710bbdf0908e95e510a0d18533f2ac399dba72b53b"} err="failed to get container status \"bfb077b2983815767698c9710bbdf0908e95e510a0d18533f2ac399dba72b53b\": rpc error: code = NotFound desc = could not find container \"bfb077b2983815767698c9710bbdf0908e95e510a0d18533f2ac399dba72b53b\": container with ID starting with bfb077b2983815767698c9710bbdf0908e95e510a0d18533f2ac399dba72b53b not found: ID does not exist" Mar 18 09:58:20.114400 master-0 
kubenswrapper[8244]: I0318 09:58:20.113247 8244 scope.go:117] "RemoveContainer" containerID="6941d2dc3af8a6f0a7a42402a8b6a8cb7e3939aea2a08fc60751b9b1690867c8" Mar 18 09:58:20.114400 master-0 kubenswrapper[8244]: I0318 09:58:20.113556 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6941d2dc3af8a6f0a7a42402a8b6a8cb7e3939aea2a08fc60751b9b1690867c8"} err="failed to get container status \"6941d2dc3af8a6f0a7a42402a8b6a8cb7e3939aea2a08fc60751b9b1690867c8\": rpc error: code = NotFound desc = could not find container \"6941d2dc3af8a6f0a7a42402a8b6a8cb7e3939aea2a08fc60751b9b1690867c8\": container with ID starting with 6941d2dc3af8a6f0a7a42402a8b6a8cb7e3939aea2a08fc60751b9b1690867c8 not found: ID does not exist" Mar 18 09:58:20.114400 master-0 kubenswrapper[8244]: I0318 09:58:20.113601 8244 scope.go:117] "RemoveContainer" containerID="bfb077b2983815767698c9710bbdf0908e95e510a0d18533f2ac399dba72b53b" Mar 18 09:58:20.114400 master-0 kubenswrapper[8244]: I0318 09:58:20.114070 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfb077b2983815767698c9710bbdf0908e95e510a0d18533f2ac399dba72b53b"} err="failed to get container status \"bfb077b2983815767698c9710bbdf0908e95e510a0d18533f2ac399dba72b53b\": rpc error: code = NotFound desc = could not find container \"bfb077b2983815767698c9710bbdf0908e95e510a0d18533f2ac399dba72b53b\": container with ID starting with bfb077b2983815767698c9710bbdf0908e95e510a0d18533f2ac399dba72b53b not found: ID does not exist" Mar 18 09:58:20.177969 master-0 kubenswrapper[8244]: I0318 09:58:20.177890 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-rpxn4\" (UID: \"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4" Mar 18 09:58:20.177969 master-0 kubenswrapper[8244]: I0318 09:58:20.177952 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ad4aa30-f7d5-47ca-b01e-2643f7195685-config\") pod \"machine-approver-5c6485487f-95jvh\" (UID: \"1ad4aa30-f7d5-47ca-b01e-2643f7195685\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-95jvh" Mar 18 09:58:20.178292 master-0 kubenswrapper[8244]: I0318 09:58:20.178212 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1ad4aa30-f7d5-47ca-b01e-2643f7195685-machine-approver-tls\") pod \"machine-approver-5c6485487f-95jvh\" (UID: \"1ad4aa30-f7d5-47ca-b01e-2643f7195685\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-95jvh" Mar 18 09:58:20.178417 master-0 kubenswrapper[8244]: I0318 09:58:20.178375 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d89r9\" (UniqueName: \"kubernetes.io/projected/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-kube-api-access-d89r9\") pod \"cluster-cloud-controller-manager-operator-7dff898856-rpxn4\" (UID: \"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4" Mar 18 09:58:20.178496 master-0 kubenswrapper[8244]: I0318 09:58:20.178445 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-rpxn4\" (UID: \"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4" Mar 18 09:58:20.178496 master-0 kubenswrapper[8244]: I0318 09:58:20.178478 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-rpxn4\" (UID: \"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4" Mar 18 09:58:20.178588 master-0 kubenswrapper[8244]: I0318 09:58:20.178570 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp8vt\" (UniqueName: \"kubernetes.io/projected/1ad4aa30-f7d5-47ca-b01e-2643f7195685-kube-api-access-fp8vt\") pod \"machine-approver-5c6485487f-95jvh\" (UID: \"1ad4aa30-f7d5-47ca-b01e-2643f7195685\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-95jvh" Mar 18 09:58:20.178748 master-0 kubenswrapper[8244]: I0318 09:58:20.178679 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ad4aa30-f7d5-47ca-b01e-2643f7195685-auth-proxy-config\") pod \"machine-approver-5c6485487f-95jvh\" (UID: \"1ad4aa30-f7d5-47ca-b01e-2643f7195685\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-95jvh" Mar 18 09:58:20.178864 master-0 kubenswrapper[8244]: I0318 09:58:20.178789 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-rpxn4\" (UID: \"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4" Mar 18 09:58:20.280148 master-0 kubenswrapper[8244]: I0318 09:58:20.280003 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp8vt\" (UniqueName: \"kubernetes.io/projected/1ad4aa30-f7d5-47ca-b01e-2643f7195685-kube-api-access-fp8vt\") pod \"machine-approver-5c6485487f-95jvh\" (UID: \"1ad4aa30-f7d5-47ca-b01e-2643f7195685\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-95jvh" Mar 18 09:58:20.280148 master-0 kubenswrapper[8244]: I0318 09:58:20.280100 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ad4aa30-f7d5-47ca-b01e-2643f7195685-auth-proxy-config\") pod \"machine-approver-5c6485487f-95jvh\" (UID: \"1ad4aa30-f7d5-47ca-b01e-2643f7195685\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-95jvh" Mar 18 09:58:20.280410 master-0 kubenswrapper[8244]: I0318 09:58:20.280156 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-rpxn4\" (UID: \"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4" Mar 18 09:58:20.280410 master-0 kubenswrapper[8244]: I0318 09:58:20.280382 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-rpxn4\" (UID: \"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4" Mar 18 09:58:20.280495 master-0 kubenswrapper[8244]: I0318 09:58:20.280433 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ad4aa30-f7d5-47ca-b01e-2643f7195685-config\") pod \"machine-approver-5c6485487f-95jvh\" (UID: \"1ad4aa30-f7d5-47ca-b01e-2643f7195685\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-95jvh" Mar 18 09:58:20.280557 master-0 kubenswrapper[8244]: I0318 09:58:20.280489 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1ad4aa30-f7d5-47ca-b01e-2643f7195685-machine-approver-tls\") pod \"machine-approver-5c6485487f-95jvh\" (UID: \"1ad4aa30-f7d5-47ca-b01e-2643f7195685\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-95jvh" Mar 18 09:58:20.280622 master-0 kubenswrapper[8244]: I0318 09:58:20.280554 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-rpxn4\" (UID: \"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4" Mar 18 09:58:20.280622 master-0 kubenswrapper[8244]: I0318 09:58:20.280553 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d89r9\" (UniqueName: \"kubernetes.io/projected/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-kube-api-access-d89r9\") pod \"cluster-cloud-controller-manager-operator-7dff898856-rpxn4\" (UID: \"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4" Mar 18 09:58:20.280701 master-0 
kubenswrapper[8244]: I0318 09:58:20.280640 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-rpxn4\" (UID: \"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4" Mar 18 09:58:20.280701 master-0 kubenswrapper[8244]: I0318 09:58:20.280684 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-rpxn4\" (UID: \"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4" Mar 18 09:58:20.281643 master-0 kubenswrapper[8244]: I0318 09:58:20.281468 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ad4aa30-f7d5-47ca-b01e-2643f7195685-auth-proxy-config\") pod \"machine-approver-5c6485487f-95jvh\" (UID: \"1ad4aa30-f7d5-47ca-b01e-2643f7195685\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-95jvh" Mar 18 09:58:20.281643 master-0 kubenswrapper[8244]: I0318 09:58:20.281576 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-rpxn4\" (UID: \"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4" Mar 18 09:58:20.281643 master-0 kubenswrapper[8244]: I0318 09:58:20.281579 8244 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-rpxn4\" (UID: \"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4" Mar 18 09:58:20.281781 master-0 kubenswrapper[8244]: I0318 09:58:20.281732 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ad4aa30-f7d5-47ca-b01e-2643f7195685-config\") pod \"machine-approver-5c6485487f-95jvh\" (UID: \"1ad4aa30-f7d5-47ca-b01e-2643f7195685\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-95jvh" Mar 18 09:58:20.284787 master-0 kubenswrapper[8244]: I0318 09:58:20.284743 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-rpxn4\" (UID: \"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4" Mar 18 09:58:20.285980 master-0 kubenswrapper[8244]: I0318 09:58:20.285931 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1ad4aa30-f7d5-47ca-b01e-2643f7195685-machine-approver-tls\") pod \"machine-approver-5c6485487f-95jvh\" (UID: \"1ad4aa30-f7d5-47ca-b01e-2643f7195685\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-95jvh" Mar 18 09:58:20.302182 master-0 kubenswrapper[8244]: I0318 09:58:20.298742 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp8vt\" (UniqueName: 
\"kubernetes.io/projected/1ad4aa30-f7d5-47ca-b01e-2643f7195685-kube-api-access-fp8vt\") pod \"machine-approver-5c6485487f-95jvh\" (UID: \"1ad4aa30-f7d5-47ca-b01e-2643f7195685\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-95jvh" Mar 18 09:58:20.307843 master-0 kubenswrapper[8244]: I0318 09:58:20.307769 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d89r9\" (UniqueName: \"kubernetes.io/projected/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-kube-api-access-d89r9\") pod \"cluster-cloud-controller-manager-operator-7dff898856-rpxn4\" (UID: \"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4" Mar 18 09:58:20.413645 master-0 kubenswrapper[8244]: I0318 09:58:20.413576 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4" Mar 18 09:58:20.423682 master-0 kubenswrapper[8244]: I0318 09:58:20.423641 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-95jvh" Mar 18 09:58:20.439850 master-0 kubenswrapper[8244]: W0318 09:58:20.439754 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8641c1d1_dd79_4f1f_9343_52d1ee6faf9f.slice/crio-308f045ad48f29df3fbed5a202a7ccbbb9fcab711591e6a10e9dfffd40505d42 WatchSource:0}: Error finding container 308f045ad48f29df3fbed5a202a7ccbbb9fcab711591e6a10e9dfffd40505d42: Status 404 returned error can't find the container with id 308f045ad48f29df3fbed5a202a7ccbbb9fcab711591e6a10e9dfffd40505d42 Mar 18 09:58:20.975864 master-0 kubenswrapper[8244]: I0318 09:58:20.975159 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-95jvh" event={"ID":"1ad4aa30-f7d5-47ca-b01e-2643f7195685","Type":"ContainerStarted","Data":"45e30b02e40d619266e73fb1cdbab98ca97d3bd5d08ae86a8dce191c1d11acca"} Mar 18 09:58:20.975864 master-0 kubenswrapper[8244]: I0318 09:58:20.975324 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-95jvh" event={"ID":"1ad4aa30-f7d5-47ca-b01e-2643f7195685","Type":"ContainerStarted","Data":"a70d40880058e84142e4d02963e7aba37e4a753a42ab982dbb781aba6c1199ec"} Mar 18 09:58:20.978918 master-0 kubenswrapper[8244]: I0318 09:58:20.978880 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4" event={"ID":"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f","Type":"ContainerStarted","Data":"c9db2465522a9f31bfdb29b4350bcd424f2fa2f288ceeee292a0e5256f8ed40d"} Mar 18 09:58:20.979010 master-0 kubenswrapper[8244]: I0318 09:58:20.978959 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4" 
event={"ID":"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f","Type":"ContainerStarted","Data":"592ca06fab8bb0c93dfd3465f07a7c645bf00008deb42f76b6d5198afd1f495a"} Mar 18 09:58:20.979010 master-0 kubenswrapper[8244]: I0318 09:58:20.978982 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4" event={"ID":"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f","Type":"ContainerStarted","Data":"308f045ad48f29df3fbed5a202a7ccbbb9fcab711591e6a10e9dfffd40505d42"} Mar 18 09:58:21.741576 master-0 kubenswrapper[8244]: I0318 09:58:21.741119 8244 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c13008-d600-417e-9df1-96f3f579a11f" path="/var/lib/kubelet/pods/22c13008-d600-417e-9df1-96f3f579a11f/volumes" Mar 18 09:58:21.742546 master-0 kubenswrapper[8244]: I0318 09:58:21.742455 8244 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9eb27ff-f89f-4c0e-abac-9fdfd8cee887" path="/var/lib/kubelet/pods/e9eb27ff-f89f-4c0e-abac-9fdfd8cee887/volumes" Mar 18 09:58:21.989141 master-0 kubenswrapper[8244]: I0318 09:58:21.989104 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-95jvh" event={"ID":"1ad4aa30-f7d5-47ca-b01e-2643f7195685","Type":"ContainerStarted","Data":"989ed9d1224874eccaf2482bae9307a2390fd6b1f5f7b0d51c60b2a5d20c283b"} Mar 18 09:58:21.993267 master-0 kubenswrapper[8244]: I0318 09:58:21.993189 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rslmx"] Mar 18 09:58:21.994082 master-0 kubenswrapper[8244]: I0318 09:58:21.994066 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rslmx" Mar 18 09:58:22.001067 master-0 kubenswrapper[8244]: I0318 09:58:22.001022 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-84shv" Mar 18 09:58:22.004396 master-0 kubenswrapper[8244]: I0318 09:58:22.004362 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4" event={"ID":"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f","Type":"ContainerStarted","Data":"1fc49b5683a146e6d80773e2583f6558ac7db1cf12f6ac62b388e7a5c1244f4c"} Mar 18 09:58:22.014664 master-0 kubenswrapper[8244]: I0318 09:58:22.014612 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 18 09:58:22.045242 master-0 kubenswrapper[8244]: I0318 09:58:22.045170 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rslmx"] Mar 18 09:58:22.046573 master-0 kubenswrapper[8244]: I0318 09:58:22.046534 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-95jvh" podStartSLOduration=2.046522465 podStartE2EDuration="2.046522465s" podCreationTimestamp="2026-03-18 09:58:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:58:22.040034486 +0000 UTC m=+218.519770654" watchObservedRunningTime="2026-03-18 09:58:22.046522465 +0000 UTC m=+218.526258593" Mar 18 09:58:22.089852 master-0 kubenswrapper[8244]: I0318 09:58:22.087777 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4" 
podStartSLOduration=2.087761902 podStartE2EDuration="2.087761902s" podCreationTimestamp="2026-03-18 09:58:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:58:22.085486847 +0000 UTC m=+218.565222975" watchObservedRunningTime="2026-03-18 09:58:22.087761902 +0000 UTC m=+218.567498030" Mar 18 09:58:22.104631 master-0 kubenswrapper[8244]: I0318 09:58:22.104569 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-rslmx\" (UID: \"2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rslmx" Mar 18 09:58:22.104865 master-0 kubenswrapper[8244]: I0318 09:58:22.104647 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-rslmx\" (UID: \"2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rslmx" Mar 18 09:58:22.104865 master-0 kubenswrapper[8244]: I0318 09:58:22.104719 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmxj9\" (UniqueName: \"kubernetes.io/projected/2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0-kube-api-access-gmxj9\") pod \"machine-config-controller-b4f87c5b9-rslmx\" (UID: \"2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rslmx" Mar 18 09:58:22.206065 master-0 kubenswrapper[8244]: I0318 09:58:22.206003 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-rslmx\" (UID: \"2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rslmx" Mar 18 09:58:22.206584 master-0 kubenswrapper[8244]: I0318 09:58:22.206528 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-rslmx\" (UID: \"2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rslmx" Mar 18 09:58:22.206687 master-0 kubenswrapper[8244]: I0318 09:58:22.206614 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmxj9\" (UniqueName: \"kubernetes.io/projected/2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0-kube-api-access-gmxj9\") pod \"machine-config-controller-b4f87c5b9-rslmx\" (UID: \"2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rslmx" Mar 18 09:58:22.207561 master-0 kubenswrapper[8244]: I0318 09:58:22.207525 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-rslmx\" (UID: \"2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rslmx" Mar 18 09:58:22.214787 master-0 kubenswrapper[8244]: I0318 09:58:22.214723 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-rslmx\" (UID: \"2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0\") " 
pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rslmx" Mar 18 09:58:22.229357 master-0 kubenswrapper[8244]: I0318 09:58:22.229296 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmxj9\" (UniqueName: \"kubernetes.io/projected/2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0-kube-api-access-gmxj9\") pod \"machine-config-controller-b4f87c5b9-rslmx\" (UID: \"2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rslmx" Mar 18 09:58:22.330311 master-0 kubenswrapper[8244]: I0318 09:58:22.330222 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rslmx" Mar 18 09:58:22.658237 master-0 kubenswrapper[8244]: I0318 09:58:22.658106 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:58:22.759642 master-0 kubenswrapper[8244]: I0318 09:58:22.759570 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rslmx"] Mar 18 09:58:23.015968 master-0 kubenswrapper[8244]: I0318 09:58:23.015922 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rslmx" event={"ID":"2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0","Type":"ContainerStarted","Data":"c8c319ddb107c3bc56c6d9fe6eeed7e7744a57b20e36ccaa20a733dd325d4c8f"} Mar 18 09:58:23.016378 master-0 kubenswrapper[8244]: I0318 09:58:23.015975 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rslmx" event={"ID":"2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0","Type":"ContainerStarted","Data":"8e83e941e1bb6d2e2e4ed50989f8c4a7c436dc56c6018257d976ac9218210eba"} Mar 18 09:58:23.056703 master-0 kubenswrapper[8244]: I0318 
09:58:23.056648 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4wcqx"] Mar 18 09:58:23.057375 master-0 kubenswrapper[8244]: I0318 09:58:23.057353 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4wcqx" Mar 18 09:58:23.059178 master-0 kubenswrapper[8244]: I0318 09:58:23.059148 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 18 09:58:23.061088 master-0 kubenswrapper[8244]: I0318 09:58:23.061059 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-b4bf74f6-4kpnv"] Mar 18 09:58:23.061946 master-0 kubenswrapper[8244]: I0318 09:58:23.061922 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-4kpnv" Mar 18 09:58:23.063519 master-0 kubenswrapper[8244]: I0318 09:58:23.063469 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-7dcf5569b5-82tbk"] Mar 18 09:58:23.064192 master-0 kubenswrapper[8244]: I0318 09:58:23.064166 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" Mar 18 09:58:23.066739 master-0 kubenswrapper[8244]: I0318 09:58:23.066702 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 18 09:58:23.067672 master-0 kubenswrapper[8244]: I0318 09:58:23.067628 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 18 09:58:23.067931 master-0 kubenswrapper[8244]: I0318 09:58:23.067898 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 18 09:58:23.067996 master-0 kubenswrapper[8244]: I0318 09:58:23.067936 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 18 09:58:23.068054 master-0 kubenswrapper[8244]: I0318 09:58:23.068036 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 18 09:58:23.068130 master-0 kubenswrapper[8244]: I0318 09:58:23.068104 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 18 09:58:23.078144 master-0 kubenswrapper[8244]: I0318 09:58:23.078097 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-b4bf74f6-4kpnv"] Mar 18 09:58:23.081068 master-0 kubenswrapper[8244]: I0318 09:58:23.081047 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4wcqx"] Mar 18 09:58:23.087179 master-0 kubenswrapper[8244]: I0318 09:58:23.087138 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-rzksb"] Mar 18 09:58:23.088196 master-0 kubenswrapper[8244]: I0318 09:58:23.088168 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rzksb" Mar 18 09:58:23.091903 master-0 kubenswrapper[8244]: I0318 09:58:23.091871 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 18 09:58:23.092146 master-0 kubenswrapper[8244]: I0318 09:58:23.092129 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 18 09:58:23.092306 master-0 kubenswrapper[8244]: I0318 09:58:23.092282 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-945t9" Mar 18 09:58:23.092353 master-0 kubenswrapper[8244]: I0318 09:58:23.092342 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 18 09:58:23.098272 master-0 kubenswrapper[8244]: I0318 09:58:23.098237 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rzksb"] Mar 18 09:58:23.123093 master-0 kubenswrapper[8244]: I0318 09:58:23.117793 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/582d2ba8-1210-47d0-a530-0b20b2fdde22-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-4wcqx\" (UID: \"582d2ba8-1210-47d0-a530-0b20b2fdde22\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4wcqx" Mar 18 09:58:23.123093 master-0 kubenswrapper[8244]: I0318 09:58:23.117897 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/74476be5-669a-4737-b93b-c4870423a4da-cert\") pod \"ingress-canary-rzksb\" (UID: \"74476be5-669a-4737-b93b-c4870423a4da\") " pod="openshift-ingress-canary/ingress-canary-rzksb" Mar 18 09:58:23.123093 master-0 kubenswrapper[8244]: I0318 09:58:23.118404 8244 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z459j\" (UniqueName: \"kubernetes.io/projected/43d54514-989c-4c82-93f9-153b44eacdd1-kube-api-access-z459j\") pod \"router-default-7dcf5569b5-82tbk\" (UID: \"43d54514-989c-4c82-93f9-153b44eacdd1\") " pod="openshift-ingress/router-default-7dcf5569b5-82tbk" Mar 18 09:58:23.123093 master-0 kubenswrapper[8244]: I0318 09:58:23.118473 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43d54514-989c-4c82-93f9-153b44eacdd1-service-ca-bundle\") pod \"router-default-7dcf5569b5-82tbk\" (UID: \"43d54514-989c-4c82-93f9-153b44eacdd1\") " pod="openshift-ingress/router-default-7dcf5569b5-82tbk" Mar 18 09:58:23.123093 master-0 kubenswrapper[8244]: I0318 09:58:23.118501 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvx6m\" (UniqueName: \"kubernetes.io/projected/74476be5-669a-4737-b93b-c4870423a4da-kube-api-access-nvx6m\") pod \"ingress-canary-rzksb\" (UID: \"74476be5-669a-4737-b93b-c4870423a4da\") " pod="openshift-ingress-canary/ingress-canary-rzksb" Mar 18 09:58:23.123093 master-0 kubenswrapper[8244]: I0318 09:58:23.118530 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4qbs\" (UniqueName: \"kubernetes.io/projected/aaadd000-4db7-4264-bfc1-b0ad63c8fb05-kube-api-access-v4qbs\") pod \"network-check-source-b4bf74f6-4kpnv\" (UID: \"aaadd000-4db7-4264-bfc1-b0ad63c8fb05\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-4kpnv" Mar 18 09:58:23.123093 master-0 kubenswrapper[8244]: I0318 09:58:23.118675 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/43d54514-989c-4c82-93f9-153b44eacdd1-stats-auth\") pod 
\"router-default-7dcf5569b5-82tbk\" (UID: \"43d54514-989c-4c82-93f9-153b44eacdd1\") " pod="openshift-ingress/router-default-7dcf5569b5-82tbk" Mar 18 09:58:23.123093 master-0 kubenswrapper[8244]: I0318 09:58:23.119622 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/43d54514-989c-4c82-93f9-153b44eacdd1-default-certificate\") pod \"router-default-7dcf5569b5-82tbk\" (UID: \"43d54514-989c-4c82-93f9-153b44eacdd1\") " pod="openshift-ingress/router-default-7dcf5569b5-82tbk" Mar 18 09:58:23.123093 master-0 kubenswrapper[8244]: I0318 09:58:23.119683 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43d54514-989c-4c82-93f9-153b44eacdd1-metrics-certs\") pod \"router-default-7dcf5569b5-82tbk\" (UID: \"43d54514-989c-4c82-93f9-153b44eacdd1\") " pod="openshift-ingress/router-default-7dcf5569b5-82tbk" Mar 18 09:58:23.221693 master-0 kubenswrapper[8244]: I0318 09:58:23.221558 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/43d54514-989c-4c82-93f9-153b44eacdd1-stats-auth\") pod \"router-default-7dcf5569b5-82tbk\" (UID: \"43d54514-989c-4c82-93f9-153b44eacdd1\") " pod="openshift-ingress/router-default-7dcf5569b5-82tbk" Mar 18 09:58:23.221903 master-0 kubenswrapper[8244]: I0318 09:58:23.221699 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/43d54514-989c-4c82-93f9-153b44eacdd1-default-certificate\") pod \"router-default-7dcf5569b5-82tbk\" (UID: \"43d54514-989c-4c82-93f9-153b44eacdd1\") " pod="openshift-ingress/router-default-7dcf5569b5-82tbk" Mar 18 09:58:23.221903 master-0 kubenswrapper[8244]: I0318 09:58:23.221743 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43d54514-989c-4c82-93f9-153b44eacdd1-metrics-certs\") pod \"router-default-7dcf5569b5-82tbk\" (UID: \"43d54514-989c-4c82-93f9-153b44eacdd1\") " pod="openshift-ingress/router-default-7dcf5569b5-82tbk"
Mar 18 09:58:23.221903 master-0 kubenswrapper[8244]: I0318 09:58:23.221799 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/582d2ba8-1210-47d0-a530-0b20b2fdde22-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-4wcqx\" (UID: \"582d2ba8-1210-47d0-a530-0b20b2fdde22\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4wcqx"
Mar 18 09:58:23.221903 master-0 kubenswrapper[8244]: I0318 09:58:23.221862 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/74476be5-669a-4737-b93b-c4870423a4da-cert\") pod \"ingress-canary-rzksb\" (UID: \"74476be5-669a-4737-b93b-c4870423a4da\") " pod="openshift-ingress-canary/ingress-canary-rzksb"
Mar 18 09:58:23.221903 master-0 kubenswrapper[8244]: I0318 09:58:23.221902 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z459j\" (UniqueName: \"kubernetes.io/projected/43d54514-989c-4c82-93f9-153b44eacdd1-kube-api-access-z459j\") pod \"router-default-7dcf5569b5-82tbk\" (UID: \"43d54514-989c-4c82-93f9-153b44eacdd1\") " pod="openshift-ingress/router-default-7dcf5569b5-82tbk"
Mar 18 09:58:23.222089 master-0 kubenswrapper[8244]: I0318 09:58:23.221946 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43d54514-989c-4c82-93f9-153b44eacdd1-service-ca-bundle\") pod \"router-default-7dcf5569b5-82tbk\" (UID: \"43d54514-989c-4c82-93f9-153b44eacdd1\") " pod="openshift-ingress/router-default-7dcf5569b5-82tbk"
Mar 18 09:58:23.222089 master-0 kubenswrapper[8244]: I0318 09:58:23.221978 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvx6m\" (UniqueName: \"kubernetes.io/projected/74476be5-669a-4737-b93b-c4870423a4da-kube-api-access-nvx6m\") pod \"ingress-canary-rzksb\" (UID: \"74476be5-669a-4737-b93b-c4870423a4da\") " pod="openshift-ingress-canary/ingress-canary-rzksb"
Mar 18 09:58:23.222089 master-0 kubenswrapper[8244]: I0318 09:58:23.222015 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4qbs\" (UniqueName: \"kubernetes.io/projected/aaadd000-4db7-4264-bfc1-b0ad63c8fb05-kube-api-access-v4qbs\") pod \"network-check-source-b4bf74f6-4kpnv\" (UID: \"aaadd000-4db7-4264-bfc1-b0ad63c8fb05\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-4kpnv"
Mar 18 09:58:23.224502 master-0 kubenswrapper[8244]: I0318 09:58:23.224467 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43d54514-989c-4c82-93f9-153b44eacdd1-service-ca-bundle\") pod \"router-default-7dcf5569b5-82tbk\" (UID: \"43d54514-989c-4c82-93f9-153b44eacdd1\") " pod="openshift-ingress/router-default-7dcf5569b5-82tbk"
Mar 18 09:58:23.229846 master-0 kubenswrapper[8244]: I0318 09:58:23.227298 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43d54514-989c-4c82-93f9-153b44eacdd1-metrics-certs\") pod \"router-default-7dcf5569b5-82tbk\" (UID: \"43d54514-989c-4c82-93f9-153b44eacdd1\") " pod="openshift-ingress/router-default-7dcf5569b5-82tbk"
Mar 18 09:58:23.229846 master-0 kubenswrapper[8244]: I0318 09:58:23.228068 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/582d2ba8-1210-47d0-a530-0b20b2fdde22-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-4wcqx\" (UID: \"582d2ba8-1210-47d0-a530-0b20b2fdde22\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4wcqx"
Mar 18 09:58:23.229846 master-0 kubenswrapper[8244]: I0318 09:58:23.228086 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/43d54514-989c-4c82-93f9-153b44eacdd1-default-certificate\") pod \"router-default-7dcf5569b5-82tbk\" (UID: \"43d54514-989c-4c82-93f9-153b44eacdd1\") " pod="openshift-ingress/router-default-7dcf5569b5-82tbk"
Mar 18 09:58:23.230027 master-0 kubenswrapper[8244]: I0318 09:58:23.228942 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/43d54514-989c-4c82-93f9-153b44eacdd1-stats-auth\") pod \"router-default-7dcf5569b5-82tbk\" (UID: \"43d54514-989c-4c82-93f9-153b44eacdd1\") " pod="openshift-ingress/router-default-7dcf5569b5-82tbk"
Mar 18 09:58:23.233904 master-0 kubenswrapper[8244]: I0318 09:58:23.230650 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/74476be5-669a-4737-b93b-c4870423a4da-cert\") pod \"ingress-canary-rzksb\" (UID: \"74476be5-669a-4737-b93b-c4870423a4da\") " pod="openshift-ingress-canary/ingress-canary-rzksb"
Mar 18 09:58:23.238659 master-0 kubenswrapper[8244]: I0318 09:58:23.238604 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4qbs\" (UniqueName: \"kubernetes.io/projected/aaadd000-4db7-4264-bfc1-b0ad63c8fb05-kube-api-access-v4qbs\") pod \"network-check-source-b4bf74f6-4kpnv\" (UID: \"aaadd000-4db7-4264-bfc1-b0ad63c8fb05\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-4kpnv"
Mar 18 09:58:23.241635 master-0 kubenswrapper[8244]: I0318 09:58:23.241586 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvx6m\" (UniqueName: \"kubernetes.io/projected/74476be5-669a-4737-b93b-c4870423a4da-kube-api-access-nvx6m\") pod \"ingress-canary-rzksb\" (UID: \"74476be5-669a-4737-b93b-c4870423a4da\") " pod="openshift-ingress-canary/ingress-canary-rzksb"
Mar 18 09:58:23.249363 master-0 kubenswrapper[8244]: I0318 09:58:23.249315 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z459j\" (UniqueName: \"kubernetes.io/projected/43d54514-989c-4c82-93f9-153b44eacdd1-kube-api-access-z459j\") pod \"router-default-7dcf5569b5-82tbk\" (UID: \"43d54514-989c-4c82-93f9-153b44eacdd1\") " pod="openshift-ingress/router-default-7dcf5569b5-82tbk"
Mar 18 09:58:23.373219 master-0 kubenswrapper[8244]: I0318 09:58:23.373128 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4wcqx"
Mar 18 09:58:23.384510 master-0 kubenswrapper[8244]: I0318 09:58:23.384440 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-4kpnv"
Mar 18 09:58:23.397761 master-0 kubenswrapper[8244]: I0318 09:58:23.397698 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-7dcf5569b5-82tbk"
Mar 18 09:58:23.414508 master-0 kubenswrapper[8244]: I0318 09:58:23.414256 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rzksb"
Mar 18 09:58:23.445317 master-0 kubenswrapper[8244]: W0318 09:58:23.445235 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43d54514_989c_4c82_93f9_153b44eacdd1.slice/crio-1a4ce30442f41beafbbdf0d0fcad6e463a305b377720e6060de4d2e923ec7031 WatchSource:0}: Error finding container 1a4ce30442f41beafbbdf0d0fcad6e463a305b377720e6060de4d2e923ec7031: Status 404 returned error can't find the container with id 1a4ce30442f41beafbbdf0d0fcad6e463a305b377720e6060de4d2e923ec7031
Mar 18 09:58:23.878212 master-0 kubenswrapper[8244]: I0318 09:58:23.876137 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4wcqx"]
Mar 18 09:58:23.920804 master-0 kubenswrapper[8244]: I0318 09:58:23.920741 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-b4bf74f6-4kpnv"]
Mar 18 09:58:24.024648 master-0 kubenswrapper[8244]: I0318 09:58:24.024066 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rzksb"]
Mar 18 09:58:24.040313 master-0 kubenswrapper[8244]: I0318 09:58:24.025403 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4wcqx" event={"ID":"582d2ba8-1210-47d0-a530-0b20b2fdde22","Type":"ContainerStarted","Data":"ac57b9f21c66b05de1907050080a6922bfb455574d5cf2698b6bd4c95c6df165"}
Mar 18 09:58:24.040313 master-0 kubenswrapper[8244]: I0318 09:58:24.027742 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" event={"ID":"43d54514-989c-4c82-93f9-153b44eacdd1","Type":"ContainerStarted","Data":"1a4ce30442f41beafbbdf0d0fcad6e463a305b377720e6060de4d2e923ec7031"}
Mar 18 09:58:24.040313 master-0 kubenswrapper[8244]: I0318 09:58:24.030667 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-4kpnv" event={"ID":"aaadd000-4db7-4264-bfc1-b0ad63c8fb05","Type":"ContainerStarted","Data":"2e6eabf2087e36d3613240f79a61ceca615c772d05baa285322d88bd80a44773"}
Mar 18 09:58:24.040313 master-0 kubenswrapper[8244]: I0318 09:58:24.033775 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rslmx" event={"ID":"2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0","Type":"ContainerStarted","Data":"d96cb7ab53ca5d6d9af7e4d6ff8a1cc6f8801aa1498657e3ed46c346db13bd52"}
Mar 18 09:58:24.068857 master-0 kubenswrapper[8244]: I0318 09:58:24.068026 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rslmx" podStartSLOduration=3.067997649 podStartE2EDuration="3.067997649s" podCreationTimestamp="2026-03-18 09:58:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:58:24.055484103 +0000 UTC m=+220.535220271" watchObservedRunningTime="2026-03-18 09:58:24.067997649 +0000 UTC m=+220.547733787"
Mar 18 09:58:24.992955 master-0 kubenswrapper[8244]: I0318 09:58:24.992910 8244 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 18 09:58:25.053761 master-0 kubenswrapper[8244]: I0318 09:58:25.053711 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-4kpnv" event={"ID":"aaadd000-4db7-4264-bfc1-b0ad63c8fb05","Type":"ContainerStarted","Data":"adc2379edf927c5b1ee8a7b21ec16b5d16e4a3b965bac737174e268e068f12c5"}
Mar 18 09:58:25.058537 master-0 kubenswrapper[8244]: I0318 09:58:25.058498 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rzksb" event={"ID":"74476be5-669a-4737-b93b-c4870423a4da","Type":"ContainerStarted","Data":"0c4f99f77add7c35bc2a58be3e90fe712c73afec99d30b2b9f1f5ffa2b32ca37"}
Mar 18 09:58:25.058648 master-0 kubenswrapper[8244]: I0318 09:58:25.058542 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rzksb" event={"ID":"74476be5-669a-4737-b93b-c4870423a4da","Type":"ContainerStarted","Data":"fe35b5f7a2da5ebf4bbbee570d091e9d7b1840cb3252d65d0a8b082be7bbb647"}
Mar 18 09:58:25.089877 master-0 kubenswrapper[8244]: I0318 09:58:25.087159 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-4kpnv" podStartSLOduration=307.087127695 podStartE2EDuration="5m7.087127695s" podCreationTimestamp="2026-03-18 09:53:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:58:25.068246114 +0000 UTC m=+221.547982242" watchObservedRunningTime="2026-03-18 09:58:25.087127695 +0000 UTC m=+221.566863823"
Mar 18 09:58:25.090399 master-0 kubenswrapper[8244]: I0318 09:58:25.090339 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-rzksb" podStartSLOduration=2.090329864 podStartE2EDuration="2.090329864s" podCreationTimestamp="2026-03-18 09:58:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:58:25.086535801 +0000 UTC m=+221.566271929" watchObservedRunningTime="2026-03-18 09:58:25.090329864 +0000 UTC m=+221.570065992"
Mar 18 09:58:27.069722 master-0 kubenswrapper[8244]: I0318 09:58:27.069641 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4wcqx" event={"ID":"582d2ba8-1210-47d0-a530-0b20b2fdde22","Type":"ContainerStarted","Data":"1d7d1d99b4af090adcb396ed6370fa6ae065a763984050aec6d76d888b70d9a8"}
Mar 18 09:58:27.070182 master-0 kubenswrapper[8244]: I0318 09:58:27.070043 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4wcqx"
Mar 18 09:58:27.072650 master-0 kubenswrapper[8244]: I0318 09:58:27.072598 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" event={"ID":"43d54514-989c-4c82-93f9-153b44eacdd1","Type":"ContainerStarted","Data":"83d2d113ec64b26f85c2da77fcf83ffd1c0559babf05a97c582bf5bda8d8a7a5"}
Mar 18 09:58:27.078479 master-0 kubenswrapper[8244]: I0318 09:58:27.078406 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4wcqx"
Mar 18 09:58:27.120106 master-0 kubenswrapper[8244]: I0318 09:58:27.120012 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4wcqx" podStartSLOduration=170.548278702 podStartE2EDuration="2m53.119985768s" podCreationTimestamp="2026-03-18 09:55:34 +0000 UTC" firstStartedPulling="2026-03-18 09:58:23.882167759 +0000 UTC m=+220.361903927" lastFinishedPulling="2026-03-18 09:58:26.453874865 +0000 UTC m=+222.933610993" observedRunningTime="2026-03-18 09:58:27.117165219 +0000 UTC m=+223.596901367" watchObservedRunningTime="2026-03-18 09:58:27.119985768 +0000 UTC m=+223.599721946"
Mar 18 09:58:27.138649 master-0 kubenswrapper[8244]: I0318 09:58:27.138502 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podStartSLOduration=187.143354091 podStartE2EDuration="3m10.1384832s" podCreationTimestamp="2026-03-18 09:55:17 +0000 UTC" firstStartedPulling="2026-03-18 09:58:23.450251358 +0000 UTC m=+219.929987486" lastFinishedPulling="2026-03-18 09:58:26.445380467 +0000 UTC m=+222.925116595" observedRunningTime="2026-03-18 09:58:27.134617335 +0000 UTC m=+223.614353503" watchObservedRunningTime="2026-03-18 09:58:27.1384832 +0000 UTC m=+223.618219338"
Mar 18 09:58:27.399322 master-0 kubenswrapper[8244]: I0318 09:58:27.399157 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-82tbk"
Mar 18 09:58:27.402398 master-0 kubenswrapper[8244]: I0318 09:58:27.402303 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:58:27.402398 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:58:27.402398 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:58:27.402398 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:58:27.402398 master-0 kubenswrapper[8244]: I0318 09:58:27.402379 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:58:27.433004 master-0 kubenswrapper[8244]: I0318 09:58:27.432944 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-9wnkm"]
Mar 18 09:58:27.433746 master-0 kubenswrapper[8244]: I0318 09:58:27.433707 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9wnkm"
Mar 18 09:58:27.436419 master-0 kubenswrapper[8244]: I0318 09:58:27.435757 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-bncrc"
Mar 18 09:58:27.436419 master-0 kubenswrapper[8244]: I0318 09:58:27.435870 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 18 09:58:27.436419 master-0 kubenswrapper[8244]: I0318 09:58:27.436228 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Mar 18 09:58:27.483581 master-0 kubenswrapper[8244]: I0318 09:58:27.483545 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4g9s\" (UniqueName: \"kubernetes.io/projected/196e7607-1ddf-467b-9901-b4be746130a1-kube-api-access-l4g9s\") pod \"machine-config-server-9wnkm\" (UID: \"196e7607-1ddf-467b-9901-b4be746130a1\") " pod="openshift-machine-config-operator/machine-config-server-9wnkm"
Mar 18 09:58:27.483886 master-0 kubenswrapper[8244]: I0318 09:58:27.483866 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/196e7607-1ddf-467b-9901-b4be746130a1-node-bootstrap-token\") pod \"machine-config-server-9wnkm\" (UID: \"196e7607-1ddf-467b-9901-b4be746130a1\") " pod="openshift-machine-config-operator/machine-config-server-9wnkm"
Mar 18 09:58:27.484031 master-0 kubenswrapper[8244]: I0318 09:58:27.484012 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/196e7607-1ddf-467b-9901-b4be746130a1-certs\") pod \"machine-config-server-9wnkm\" (UID: \"196e7607-1ddf-467b-9901-b4be746130a1\") " pod="openshift-machine-config-operator/machine-config-server-9wnkm"
Mar 18 09:58:27.584915 master-0 kubenswrapper[8244]: I0318 09:58:27.584858 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/196e7607-1ddf-467b-9901-b4be746130a1-node-bootstrap-token\") pod \"machine-config-server-9wnkm\" (UID: \"196e7607-1ddf-467b-9901-b4be746130a1\") " pod="openshift-machine-config-operator/machine-config-server-9wnkm"
Mar 18 09:58:27.584915 master-0 kubenswrapper[8244]: I0318 09:58:27.584918 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/196e7607-1ddf-467b-9901-b4be746130a1-certs\") pod \"machine-config-server-9wnkm\" (UID: \"196e7607-1ddf-467b-9901-b4be746130a1\") " pod="openshift-machine-config-operator/machine-config-server-9wnkm"
Mar 18 09:58:27.585209 master-0 kubenswrapper[8244]: I0318 09:58:27.585103 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4g9s\" (UniqueName: \"kubernetes.io/projected/196e7607-1ddf-467b-9901-b4be746130a1-kube-api-access-l4g9s\") pod \"machine-config-server-9wnkm\" (UID: \"196e7607-1ddf-467b-9901-b4be746130a1\") " pod="openshift-machine-config-operator/machine-config-server-9wnkm"
Mar 18 09:58:27.590304 master-0 kubenswrapper[8244]: I0318 09:58:27.590224 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/196e7607-1ddf-467b-9901-b4be746130a1-node-bootstrap-token\") pod \"machine-config-server-9wnkm\" (UID: \"196e7607-1ddf-467b-9901-b4be746130a1\") " pod="openshift-machine-config-operator/machine-config-server-9wnkm"
Mar 18 09:58:27.600398 master-0 kubenswrapper[8244]: I0318 09:58:27.600351 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/196e7607-1ddf-467b-9901-b4be746130a1-certs\") pod \"machine-config-server-9wnkm\" (UID: \"196e7607-1ddf-467b-9901-b4be746130a1\") " pod="openshift-machine-config-operator/machine-config-server-9wnkm"
Mar 18 09:58:27.601252 master-0 kubenswrapper[8244]: I0318 09:58:27.601222 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4g9s\" (UniqueName: \"kubernetes.io/projected/196e7607-1ddf-467b-9901-b4be746130a1-kube-api-access-l4g9s\") pod \"machine-config-server-9wnkm\" (UID: \"196e7607-1ddf-467b-9901-b4be746130a1\") " pod="openshift-machine-config-operator/machine-config-server-9wnkm"
Mar 18 09:58:27.775560 master-0 kubenswrapper[8244]: I0318 09:58:27.775252 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9wnkm"
Mar 18 09:58:27.879803 master-0 kubenswrapper[8244]: I0318 09:58:27.879761 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-4q9tr_f076eaf0-b041-4db0-ba06-3d85e23bb654/authentication-operator/1.log"
Mar 18 09:58:28.081949 master-0 kubenswrapper[8244]: I0318 09:58:28.081898 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9wnkm" event={"ID":"196e7607-1ddf-467b-9901-b4be746130a1","Type":"ContainerStarted","Data":"bf50bdabce32efe95568748f8024bd8478c5366e0a0da861a88fe82785e83299"}
Mar 18 09:58:28.081949 master-0 kubenswrapper[8244]: I0318 09:58:28.081954 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9wnkm" event={"ID":"196e7607-1ddf-467b-9901-b4be746130a1","Type":"ContainerStarted","Data":"dda9475997ae063330eb66def313ccd5f6f56fc68307fe940171e35bbbb378fc"}
Mar 18 09:58:28.083565 master-0 kubenswrapper[8244]: I0318 09:58:28.083508 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-4q9tr_f076eaf0-b041-4db0-ba06-3d85e23bb654/authentication-operator/2.log"
Mar 18 09:58:28.101580 master-0 kubenswrapper[8244]: I0318 09:58:28.101506 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-9wnkm" podStartSLOduration=1.101487855 podStartE2EDuration="1.101487855s" podCreationTimestamp="2026-03-18 09:58:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:58:28.097667181 +0000 UTC m=+224.577403309" watchObservedRunningTime="2026-03-18 09:58:28.101487855 +0000 UTC m=+224.581223983"
Mar 18 09:58:28.176166 master-0 kubenswrapper[8244]: I0318 09:58:28.176052 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-6c8df6d4b-886k6"]
Mar 18 09:58:28.176944 master-0 kubenswrapper[8244]: I0318 09:58:28.176911 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-886k6"
Mar 18 09:58:28.181372 master-0 kubenswrapper[8244]: I0318 09:58:28.180174 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Mar 18 09:58:28.181372 master-0 kubenswrapper[8244]: I0318 09:58:28.180203 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-bs6wb"
Mar 18 09:58:28.181372 master-0 kubenswrapper[8244]: I0318 09:58:28.180223 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Mar 18 09:58:28.181372 master-0 kubenswrapper[8244]: I0318 09:58:28.180546 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Mar 18 09:58:28.197841 master-0 kubenswrapper[8244]: I0318 09:58:28.193505 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-6c8df6d4b-886k6"]
Mar 18 09:58:28.277015 master-0 kubenswrapper[8244]: I0318 09:58:28.276974 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-7dcf5569b5-82tbk_43d54514-989c-4c82-93f9-153b44eacdd1/router/0.log"
Mar 18 09:58:28.295447 master-0 kubenswrapper[8244]: I0318 09:58:28.295403 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9cfd2323-c33a-4d80-9c25-710920c0e605-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-886k6\" (UID: \"9cfd2323-c33a-4d80-9c25-710920c0e605\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-886k6"
Mar 18 09:58:28.295447 master-0 kubenswrapper[8244]: I0318 09:58:28.295452 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blfkg\" (UniqueName: \"kubernetes.io/projected/9cfd2323-c33a-4d80-9c25-710920c0e605-kube-api-access-blfkg\") pod \"prometheus-operator-6c8df6d4b-886k6\" (UID: \"9cfd2323-c33a-4d80-9c25-710920c0e605\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-886k6"
Mar 18 09:58:28.295713 master-0 kubenswrapper[8244]: I0318 09:58:28.295522 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/9cfd2323-c33a-4d80-9c25-710920c0e605-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-886k6\" (UID: \"9cfd2323-c33a-4d80-9c25-710920c0e605\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-886k6"
Mar 18 09:58:28.295713 master-0 kubenswrapper[8244]: I0318 09:58:28.295548 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9cfd2323-c33a-4d80-9c25-710920c0e605-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-886k6\" (UID: \"9cfd2323-c33a-4d80-9c25-710920c0e605\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-886k6"
Mar 18 09:58:28.396636 master-0 kubenswrapper[8244]: I0318 09:58:28.396533 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/9cfd2323-c33a-4d80-9c25-710920c0e605-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-886k6\" (UID: \"9cfd2323-c33a-4d80-9c25-710920c0e605\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-886k6"
Mar 18 09:58:28.396636 master-0 kubenswrapper[8244]: I0318 09:58:28.396602 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9cfd2323-c33a-4d80-9c25-710920c0e605-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-886k6\" (UID: \"9cfd2323-c33a-4d80-9c25-710920c0e605\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-886k6"
Mar 18 09:58:28.396858 master-0 kubenswrapper[8244]: I0318 09:58:28.396648 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9cfd2323-c33a-4d80-9c25-710920c0e605-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-886k6\" (UID: \"9cfd2323-c33a-4d80-9c25-710920c0e605\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-886k6"
Mar 18 09:58:28.396858 master-0 kubenswrapper[8244]: I0318 09:58:28.396671 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blfkg\" (UniqueName: \"kubernetes.io/projected/9cfd2323-c33a-4d80-9c25-710920c0e605-kube-api-access-blfkg\") pod \"prometheus-operator-6c8df6d4b-886k6\" (UID: \"9cfd2323-c33a-4d80-9c25-710920c0e605\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-886k6"
Mar 18 09:58:28.398224 master-0 kubenswrapper[8244]: I0318 09:58:28.397849 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9cfd2323-c33a-4d80-9c25-710920c0e605-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-886k6\" (UID: \"9cfd2323-c33a-4d80-9c25-710920c0e605\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-886k6"
Mar 18 09:58:28.400400 master-0 kubenswrapper[8244]: I0318 09:58:28.400364 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/9cfd2323-c33a-4d80-9c25-710920c0e605-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-886k6\" (UID: \"9cfd2323-c33a-4d80-9c25-710920c0e605\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-886k6"
Mar 18 09:58:28.400555 master-0 kubenswrapper[8244]: I0318 09:58:28.400519 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:58:28.400555 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:58:28.400555 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:58:28.400555 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:58:28.400661 master-0 kubenswrapper[8244]: I0318 09:58:28.400569 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:58:28.401638 master-0 kubenswrapper[8244]: I0318 09:58:28.401604 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9cfd2323-c33a-4d80-9c25-710920c0e605-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-886k6\" (UID: \"9cfd2323-c33a-4d80-9c25-710920c0e605\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-886k6"
Mar 18 09:58:28.417722 master-0 kubenswrapper[8244]: I0318 09:58:28.417682 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blfkg\" (UniqueName: \"kubernetes.io/projected/9cfd2323-c33a-4d80-9c25-710920c0e605-kube-api-access-blfkg\") pod \"prometheus-operator-6c8df6d4b-886k6\" (UID: \"9cfd2323-c33a-4d80-9c25-710920c0e605\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-886k6"
Mar 18 09:58:28.475729 master-0 kubenswrapper[8244]: I0318 09:58:28.475690 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-6d58f9cc86-7vcln_8b906fc0-f2bf-4586-97e6-921bbd467b65/fix-audit-permissions/0.log"
Mar 18 09:58:28.511525 master-0 kubenswrapper[8244]: I0318 09:58:28.511450 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-886k6"
Mar 18 09:58:29.067071 master-0 kubenswrapper[8244]: I0318 09:58:29.067013 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-6d58f9cc86-7vcln_8b906fc0-f2bf-4586-97e6-921bbd467b65/oauth-apiserver/0.log"
Mar 18 09:58:29.084849 master-0 kubenswrapper[8244]: I0318 09:58:29.082140 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-4tlnm_a078565a-6970-4f42-84f4-938f1d637245/etcd-operator/1.log"
Mar 18 09:58:29.092275 master-0 kubenswrapper[8244]: I0318 09:58:29.087414 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-4tlnm_a078565a-6970-4f42-84f4-938f1d637245/etcd-operator/2.log"
Mar 18 09:58:29.321953 master-0 kubenswrapper[8244]: I0318 09:58:29.321802 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/setup/0.log"
Mar 18 09:58:29.405872 master-0 kubenswrapper[8244]: I0318 09:58:29.405761 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:58:29.405872 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:58:29.405872 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:58:29.405872 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:58:29.405872 master-0 kubenswrapper[8244]: I0318 09:58:29.405859 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:58:29.411328 master-0 kubenswrapper[8244]: I0318 09:58:29.411283 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-6c8df6d4b-886k6"]
Mar 18 09:58:29.417371 master-0 kubenswrapper[8244]: W0318 09:58:29.417329 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9cfd2323_c33a_4d80_9c25_710920c0e605.slice/crio-15afbeaf2b91c3dde6de78ecc76cf185217127e7fd54f971970a9dc91ec72267 WatchSource:0}: Error finding container 15afbeaf2b91c3dde6de78ecc76cf185217127e7fd54f971970a9dc91ec72267: Status 404 returned error can't find the container with id 15afbeaf2b91c3dde6de78ecc76cf185217127e7fd54f971970a9dc91ec72267
Mar 18 09:58:29.476489 master-0 kubenswrapper[8244]: I0318 09:58:29.476447 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-ensure-env-vars/0.log"
Mar 18 09:58:29.677037 master-0 kubenswrapper[8244]: I0318 09:58:29.676923 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-resources-copy/0.log"
Mar 18 09:58:29.878595 master-0 kubenswrapper[8244]: I0318 09:58:29.878530 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcdctl/0.log"
Mar 18 09:58:30.081257 master-0 kubenswrapper[8244]: I0318 09:58:30.081120 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd/0.log"
Mar 18 09:58:30.118180 master-0 kubenswrapper[8244]: I0318 09:58:30.118127 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-886k6" event={"ID":"9cfd2323-c33a-4d80-9c25-710920c0e605","Type":"ContainerStarted","Data":"15afbeaf2b91c3dde6de78ecc76cf185217127e7fd54f971970a9dc91ec72267"}
Mar 18 09:58:30.403546 master-0 kubenswrapper[8244]: I0318 09:58:30.403410 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:58:30.403546 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:58:30.403546 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:58:30.403546 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:58:30.403771 master-0 kubenswrapper[8244]: I0318 09:58:30.403526 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:58:30.413063 master-0 kubenswrapper[8244]: I0318 09:58:30.412997 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log"
Mar 18 09:58:30.476560 master-0 kubenswrapper[8244]: I0318 09:58:30.476516 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-readyz/0.log"
Mar 18 09:58:30.676805 master-0 kubenswrapper[8244]: I0318 09:58:30.676707 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log"
Mar 18 09:58:30.949572 master-0 kubenswrapper[8244]: I0318 09:58:30.949470 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_be8bd84c-8035-4bec-a725-b0ae89382c0f/installer/0.log"
Mar 18 09:58:31.077718 master-0 kubenswrapper[8244]: I0318 09:58:31.077681 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-smghb_6a6a616d-012a-479e-ab3d-b21295ea1805/kube-apiserver-operator/1.log"
Mar 18 09:58:31.402538 master-0 kubenswrapper[8244]: I0318 09:58:31.402432 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:58:31.402538 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:58:31.402538 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:58:31.402538 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:58:31.403480 master-0 kubenswrapper[8244]: I0318 09:58:31.402578 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:58:31.420183 master-0 kubenswrapper[8244]: I0318 09:58:31.420137 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-smghb_6a6a616d-012a-479e-ab3d-b21295ea1805/kube-apiserver-operator/2.log"
Mar 18 09:58:31.814161 master-0 kubenswrapper[8244]: I0318 09:58:31.814106 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_49fac1b46a11e49501805e891baae4a9/setup/0.log"
Mar 18 09:58:31.849342 master-0 kubenswrapper[8244]: I0318 09:58:31.848278 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_49fac1b46a11e49501805e891baae4a9/kube-apiserver/0.log"
Mar 18 09:58:31.875447 master-0 kubenswrapper[8244]: I0318 09:58:31.875389 8244 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_49fac1b46a11e49501805e891baae4a9/kube-apiserver-insecure-readyz/0.log" Mar 18 09:58:32.401804 master-0 kubenswrapper[8244]: I0318 09:58:32.401636 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:58:32.401804 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:58:32.401804 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:58:32.401804 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:58:32.401804 master-0 kubenswrapper[8244]: I0318 09:58:32.401748 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:58:32.464041 master-0 kubenswrapper[8244]: I0318 09:58:32.463954 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_5fb70bf3-93cd-4000-be1a-8e21846d5709/installer/0.log" Mar 18 09:58:33.399491 master-0 kubenswrapper[8244]: I0318 09:58:33.399445 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" Mar 18 09:58:33.401320 master-0 kubenswrapper[8244]: I0318 09:58:33.401300 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:58:33.401320 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:58:33.401320 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:58:33.401320 
master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:58:33.401507 master-0 kubenswrapper[8244]: I0318 09:58:33.401486 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:58:33.685462 master-0 kubenswrapper[8244]: I0318 09:58:33.685303 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_a4d7edd6-7975-468e-adea-138d92ed1be1/installer/0.log" Mar 18 09:58:34.401749 master-0 kubenswrapper[8244]: I0318 09:58:34.401692 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:58:34.401749 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:58:34.401749 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:58:34.401749 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:58:34.401749 master-0 kubenswrapper[8244]: I0318 09:58:34.401759 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:58:35.401081 master-0 kubenswrapper[8244]: I0318 09:58:35.400985 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:58:35.401081 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:58:35.401081 master-0 kubenswrapper[8244]: 
[+]process-running ok Mar 18 09:58:35.401081 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:58:35.401755 master-0 kubenswrapper[8244]: I0318 09:58:35.401137 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:58:36.113431 master-0 kubenswrapper[8244]: I0318 09:58:36.113349 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_b82be17f9a809bd5efbd88c0026e8713/kube-controller-manager/0.log" Mar 18 09:58:36.303398 master-0 kubenswrapper[8244]: I0318 09:58:36.300202 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_b82be17f9a809bd5efbd88c0026e8713/cluster-policy-controller/0.log" Mar 18 09:58:36.362885 master-0 kubenswrapper[8244]: I0318 09:58:36.361642 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_b82be17f9a809bd5efbd88c0026e8713/kube-controller-manager-cert-syncer/0.log" Mar 18 09:58:36.402229 master-0 kubenswrapper[8244]: I0318 09:58:36.402100 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:58:36.402229 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:58:36.402229 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:58:36.402229 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:58:36.402229 master-0 kubenswrapper[8244]: I0318 09:58:36.402194 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" 
podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:58:36.811534 master-0 kubenswrapper[8244]: I0318 09:58:36.811353 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_b82be17f9a809bd5efbd88c0026e8713/kube-controller-manager-recovery-controller/0.log" Mar 18 09:58:37.401512 master-0 kubenswrapper[8244]: I0318 09:58:37.401462 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:58:37.401512 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:58:37.401512 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:58:37.401512 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:58:37.401926 master-0 kubenswrapper[8244]: I0318 09:58:37.401536 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:58:38.100633 master-0 kubenswrapper[8244]: I0318 09:58:38.094537 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-pzqqc_0999f781-3299-4cb6-ba76-2a4f4584c685/kube-controller-manager-operator/1.log" Mar 18 09:58:38.400967 master-0 kubenswrapper[8244]: I0318 09:58:38.400839 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:58:38.400967 master-0 
kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:58:38.400967 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:58:38.400967 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:58:38.400967 master-0 kubenswrapper[8244]: I0318 09:58:38.400909 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:58:39.401475 master-0 kubenswrapper[8244]: I0318 09:58:39.401427 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:58:39.401475 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:58:39.401475 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:58:39.401475 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:58:39.402071 master-0 kubenswrapper[8244]: I0318 09:58:39.401490 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:58:40.401300 master-0 kubenswrapper[8244]: I0318 09:58:40.401226 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:58:40.401300 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:58:40.401300 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:58:40.401300 master-0 kubenswrapper[8244]: healthz check failed Mar 18 
09:58:40.402254 master-0 kubenswrapper[8244]: I0318 09:58:40.401305 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:58:41.402330 master-0 kubenswrapper[8244]: I0318 09:58:41.402186 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:58:41.402330 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:58:41.402330 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:58:41.402330 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:58:41.402330 master-0 kubenswrapper[8244]: I0318 09:58:41.402312 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:58:42.401946 master-0 kubenswrapper[8244]: I0318 09:58:42.401794 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:58:42.401946 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:58:42.401946 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:58:42.401946 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:58:42.402312 master-0 kubenswrapper[8244]: I0318 09:58:42.401967 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" 
podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:58:43.401617 master-0 kubenswrapper[8244]: I0318 09:58:43.401553 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:58:43.401617 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:58:43.401617 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:58:43.401617 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:58:43.402256 master-0 kubenswrapper[8244]: I0318 09:58:43.401637 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:58:43.857007 master-0 kubenswrapper[8244]: I0318 09:58:43.856898 8244 scope.go:117] "RemoveContainer" containerID="a8a79bb9813c53d6a7944ac3a61efc1cc0406057f3915265e59c26643cc48a9e" Mar 18 09:58:44.401981 master-0 kubenswrapper[8244]: I0318 09:58:44.401888 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:58:44.401981 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:58:44.401981 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:58:44.401981 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:58:44.402582 master-0 kubenswrapper[8244]: I0318 09:58:44.401992 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" 
podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:58:45.407709 master-0 kubenswrapper[8244]: I0318 09:58:45.407601 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:58:45.407709 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:58:45.407709 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:58:45.407709 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:58:45.408259 master-0 kubenswrapper[8244]: I0318 09:58:45.407783 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:58:45.584892 master-0 kubenswrapper[8244]: I0318 09:58:45.584760 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-pzqqc_0999f781-3299-4cb6-ba76-2a4f4584c685/kube-controller-manager-operator/2.log" Mar 18 09:58:45.609682 master-0 kubenswrapper[8244]: I0318 09:58:45.609609 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_54a208d1-afe8-49b5-92e0-e27afb4abc80/installer/0.log" Mar 18 09:58:45.624205 master-0 kubenswrapper[8244]: I0318 09:58:45.624103 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8413125cf444e5c95f023c5dd9c6151e/wait-for-host-port/0.log" Mar 18 09:58:45.632510 master-0 kubenswrapper[8244]: I0318 09:58:45.632460 8244 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8413125cf444e5c95f023c5dd9c6151e/kube-scheduler/0.log" Mar 18 09:58:45.641638 master-0 kubenswrapper[8244]: I0318 09:58:45.641608 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8413125cf444e5c95f023c5dd9c6151e/kube-scheduler-cert-syncer/0.log" Mar 18 09:58:45.654883 master-0 kubenswrapper[8244]: I0318 09:58:45.654811 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8413125cf444e5c95f023c5dd9c6151e/kube-scheduler-recovery-controller/0.log" Mar 18 09:58:45.671178 master-0 kubenswrapper[8244]: I0318 09:58:45.669203 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-vj8tt_3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6/kube-scheduler-operator-container/1.log" Mar 18 09:58:45.693440 master-0 kubenswrapper[8244]: I0318 09:58:45.693387 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-vj8tt_3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6/kube-scheduler-operator-container/2.log" Mar 18 09:58:45.713604 master-0 kubenswrapper[8244]: I0318 09:58:45.713556 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-d65958b8-zz68c_0d72e695-0183-4ee8-8add-5425e67f7138/openshift-apiserver-operator/1.log" Mar 18 09:58:45.724377 master-0 kubenswrapper[8244]: I0318 09:58:45.724333 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-d65958b8-zz68c_0d72e695-0183-4ee8-8add-5425e67f7138/openshift-apiserver-operator/2.log" Mar 18 09:58:45.731033 master-0 kubenswrapper[8244]: I0318 09:58:45.730970 8244 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-apiserver_apiserver-687747fbb4-k7dnf_0c7b317c-d141-4e69-9c82-4a5dda6c3248/fix-audit-permissions/0.log" Mar 18 09:58:45.800210 master-0 kubenswrapper[8244]: I0318 09:58:45.800160 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-687747fbb4-k7dnf_0c7b317c-d141-4e69-9c82-4a5dda6c3248/openshift-apiserver/0.log" Mar 18 09:58:45.994005 master-0 kubenswrapper[8244]: I0318 09:58:45.993968 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-687747fbb4-k7dnf_0c7b317c-d141-4e69-9c82-4a5dda6c3248/openshift-apiserver-check-endpoints/0.log" Mar 18 09:58:46.194435 master-0 kubenswrapper[8244]: I0318 09:58:46.194334 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-4tlnm_a078565a-6970-4f42-84f4-938f1d637245/etcd-operator/1.log" Mar 18 09:58:46.225959 master-0 kubenswrapper[8244]: I0318 09:58:46.225910 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-886k6" event={"ID":"9cfd2323-c33a-4d80-9c25-710920c0e605","Type":"ContainerStarted","Data":"e4006827ef9d5cee97c670b32df9c77a221a18b851c2701cac40f71ffc1bb619"} Mar 18 09:58:46.225959 master-0 kubenswrapper[8244]: I0318 09:58:46.225961 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-886k6" event={"ID":"9cfd2323-c33a-4d80-9c25-710920c0e605","Type":"ContainerStarted","Data":"3ccc6e5faa573bd7d7c2c2920fd8723b794eab5af3d733f967703a83ced17434"} Mar 18 09:58:46.265096 master-0 kubenswrapper[8244]: I0318 09:58:46.265026 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-886k6" podStartSLOduration=2.042171685 podStartE2EDuration="18.265004881s" podCreationTimestamp="2026-03-18 09:58:28 +0000 UTC" firstStartedPulling="2026-03-18 09:58:29.419462733 +0000 UTC 
m=+225.899198861" lastFinishedPulling="2026-03-18 09:58:45.642295909 +0000 UTC m=+242.122032057" observedRunningTime="2026-03-18 09:58:46.261324851 +0000 UTC m=+242.741060989" watchObservedRunningTime="2026-03-18 09:58:46.265004881 +0000 UTC m=+242.744741009" Mar 18 09:58:46.396616 master-0 kubenswrapper[8244]: I0318 09:58:46.396568 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-4tlnm_a078565a-6970-4f42-84f4-938f1d637245/etcd-operator/2.log" Mar 18 09:58:46.402110 master-0 kubenswrapper[8244]: I0318 09:58:46.402062 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:58:46.402110 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:58:46.402110 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:58:46.402110 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:58:46.402323 master-0 kubenswrapper[8244]: I0318 09:58:46.402132 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:58:46.594785 master-0 kubenswrapper[8244]: I0318 09:58:46.594730 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-g25jq_3646e0cd-49c9-4a98-a2e3-efe9359cc6c4/openshift-controller-manager-operator/1.log" Mar 18 09:58:46.792914 master-0 kubenswrapper[8244]: I0318 09:58:46.792620 8244 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-g25jq_3646e0cd-49c9-4a98-a2e3-efe9359cc6c4/openshift-controller-manager-operator/2.log" Mar 18 09:58:46.994029 master-0 kubenswrapper[8244]: I0318 09:58:46.993912 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-f8f5f6bc4-87dt7_54e26470-5ffb-4673-9375-e80031cc6750/controller-manager/0.log" Mar 18 09:58:47.197433 master-0 kubenswrapper[8244]: I0318 09:58:47.197380 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-f8f5f6bc4-87dt7_54e26470-5ffb-4673-9375-e80031cc6750/controller-manager/1.log" Mar 18 09:58:47.394655 master-0 kubenswrapper[8244]: I0318 09:58:47.394601 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-54cf6885f8-xsgcr_3a9c36d0-e3f3-441e-bbab-44703a0eeb70/route-controller-manager/0.log" Mar 18 09:58:47.401744 master-0 kubenswrapper[8244]: I0318 09:58:47.401694 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:58:47.401744 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:58:47.401744 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:58:47.401744 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:58:47.401946 master-0 kubenswrapper[8244]: I0318 09:58:47.401758 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:58:47.599527 master-0 kubenswrapper[8244]: I0318 09:58:47.599481 8244 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-68f85b4d6c-fhz5s_ee376320-9ca0-444d-ab37-9cbcb6729b11/catalog-operator/0.log" Mar 18 09:58:47.800431 master-0 kubenswrapper[8244]: I0318 09:58:47.800360 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-5c9796789-hc74k_db52ca42-e458-407f-9eeb-bf6de6405edc/olm-operator/0.log" Mar 18 09:58:48.401221 master-0 kubenswrapper[8244]: I0318 09:58:48.401152 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:58:48.401221 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:58:48.401221 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:58:48.401221 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:58:48.401578 master-0 kubenswrapper[8244]: I0318 09:58:48.401247 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:58:48.505780 master-0 kubenswrapper[8244]: I0318 09:58:48.505713 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-r8fkv_d4d2218c-f9df-4d43-8727-ed3a920e23f7/kube-rbac-proxy/0.log" Mar 18 09:58:48.521627 master-0 kubenswrapper[8244]: I0318 09:58:48.521543 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-r8fkv_d4d2218c-f9df-4d43-8727-ed3a920e23f7/package-server-manager/0.log" Mar 18 09:58:48.536320 master-0 kubenswrapper[8244]: I0318 09:58:48.536259 8244 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-7b64dcc66c-2vx58_bdf80ddc-7c99-4f60-814b-ba98809ef41d/packageserver/0.log" Mar 18 09:58:49.401745 master-0 kubenswrapper[8244]: I0318 09:58:49.401670 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:58:49.401745 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:58:49.401745 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:58:49.401745 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:58:49.402400 master-0 kubenswrapper[8244]: I0318 09:58:49.401775 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:58:49.580194 master-0 kubenswrapper[8244]: I0318 09:58:49.580150 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-5dc6c74576-6rrn7"] Mar 18 09:58:49.581447 master-0 kubenswrapper[8244]: I0318 09:58:49.581416 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-6rrn7" Mar 18 09:58:49.583495 master-0 kubenswrapper[8244]: I0318 09:58:49.583388 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-ht56j" Mar 18 09:58:49.583575 master-0 kubenswrapper[8244]: I0318 09:58:49.583542 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 18 09:58:49.583748 master-0 kubenswrapper[8244]: I0318 09:58:49.583716 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 18 09:58:49.596104 master-0 kubenswrapper[8244]: I0318 09:58:49.596062 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-l9q9t"] Mar 18 09:58:49.597923 master-0 kubenswrapper[8244]: I0318 09:58:49.597891 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 09:58:49.602062 master-0 kubenswrapper[8244]: I0318 09:58:49.602020 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 18 09:58:49.602271 master-0 kubenswrapper[8244]: I0318 09:58:49.602234 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-4lcwf" Mar 18 09:58:49.602403 master-0 kubenswrapper[8244]: I0318 09:58:49.602369 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 18 09:58:49.613994 master-0 kubenswrapper[8244]: I0318 09:58:49.613940 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-5dc6c74576-6rrn7"] Mar 18 09:58:49.615645 master-0 kubenswrapper[8244]: I0318 09:58:49.615614 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg"] Mar 18 09:58:49.618602 master-0 kubenswrapper[8244]: I0318 09:58:49.618561 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" Mar 18 09:58:49.623450 master-0 kubenswrapper[8244]: I0318 09:58:49.622354 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-pvxkh" Mar 18 09:58:49.623656 master-0 kubenswrapper[8244]: I0318 09:58:49.623630 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 18 09:58:49.623898 master-0 kubenswrapper[8244]: I0318 09:58:49.623848 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 18 09:58:49.623968 master-0 kubenswrapper[8244]: I0318 09:58:49.623906 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 18 09:58:49.649046 master-0 kubenswrapper[8244]: I0318 09:58:49.648990 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg"] Mar 18 09:58:49.677939 master-0 kubenswrapper[8244]: I0318 09:58:49.676734 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 09:58:49.677939 master-0 kubenswrapper[8244]: I0318 09:58:49.676801 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/1cb8ab19-0564-4182-a7e3-0943c1480663-root\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 09:58:49.677939 master-0 
kubenswrapper[8244]: I0318 09:58:49.676893 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v8jq\" (UniqueName: \"kubernetes.io/projected/1cb8ab19-0564-4182-a7e3-0943c1480663-kube-api-access-4v8jq\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 09:58:49.677939 master-0 kubenswrapper[8244]: I0318 09:58:49.676930 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-textfile\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 09:58:49.677939 master-0 kubenswrapper[8244]: I0318 09:58:49.676947 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-wtmp\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 09:58:49.677939 master-0 kubenswrapper[8244]: I0318 09:58:49.676967 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1cb8ab19-0564-4182-a7e3-0943c1480663-metrics-client-ca\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 09:58:49.677939 master-0 kubenswrapper[8244]: I0318 09:58:49.676991 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1cb8ab19-0564-4182-a7e3-0943c1480663-sys\") pod \"node-exporter-l9q9t\" (UID: 
\"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 09:58:49.677939 master-0 kubenswrapper[8244]: I0318 09:58:49.677010 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkvcs\" (UniqueName: \"kubernetes.io/projected/af1bbeee-1faf-43d1-943f-ee5319cef4e9-kube-api-access-nkvcs\") pod \"openshift-state-metrics-5dc6c74576-6rrn7\" (UID: \"af1bbeee-1faf-43d1-943f-ee5319cef4e9\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-6rrn7" Mar 18 09:58:49.677939 master-0 kubenswrapper[8244]: I0318 09:58:49.677027 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-tls\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 09:58:49.677939 master-0 kubenswrapper[8244]: I0318 09:58:49.677047 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5900a401-21c2-47f0-a921-47c648da558d-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" Mar 18 09:58:49.677939 master-0 kubenswrapper[8244]: I0318 09:58:49.677068 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtnxf\" (UniqueName: \"kubernetes.io/projected/5900a401-21c2-47f0-a921-47c648da558d-kube-api-access-qtnxf\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" Mar 18 09:58:49.677939 master-0 kubenswrapper[8244]: I0318 09:58:49.677084 8244 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/5900a401-21c2-47f0-a921-47c648da558d-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" Mar 18 09:58:49.677939 master-0 kubenswrapper[8244]: I0318 09:58:49.677106 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/af1bbeee-1faf-43d1-943f-ee5319cef4e9-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-6rrn7\" (UID: \"af1bbeee-1faf-43d1-943f-ee5319cef4e9\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-6rrn7" Mar 18 09:58:49.677939 master-0 kubenswrapper[8244]: I0318 09:58:49.677133 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/af1bbeee-1faf-43d1-943f-ee5319cef4e9-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-6rrn7\" (UID: \"af1bbeee-1faf-43d1-943f-ee5319cef4e9\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-6rrn7" Mar 18 09:58:49.677939 master-0 kubenswrapper[8244]: I0318 09:58:49.677155 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/af1bbeee-1faf-43d1-943f-ee5319cef4e9-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-6rrn7\" (UID: \"af1bbeee-1faf-43d1-943f-ee5319cef4e9\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-6rrn7" Mar 18 09:58:49.677939 master-0 kubenswrapper[8244]: I0318 09:58:49.677175 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" Mar 18 09:58:49.677939 master-0 kubenswrapper[8244]: I0318 09:58:49.677192 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" Mar 18 09:58:49.677939 master-0 kubenswrapper[8244]: I0318 09:58:49.677215 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" Mar 18 09:58:49.778060 master-0 kubenswrapper[8244]: I0318 09:58:49.777994 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/af1bbeee-1faf-43d1-943f-ee5319cef4e9-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-6rrn7\" (UID: \"af1bbeee-1faf-43d1-943f-ee5319cef4e9\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-6rrn7" Mar 18 09:58:49.778060 master-0 kubenswrapper[8244]: I0318 09:58:49.778065 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/af1bbeee-1faf-43d1-943f-ee5319cef4e9-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-6rrn7\" (UID: \"af1bbeee-1faf-43d1-943f-ee5319cef4e9\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-6rrn7" Mar 18 09:58:49.782393 master-0 kubenswrapper[8244]: I0318 09:58:49.778105 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/af1bbeee-1faf-43d1-943f-ee5319cef4e9-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-6rrn7\" (UID: \"af1bbeee-1faf-43d1-943f-ee5319cef4e9\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-6rrn7" Mar 18 09:58:49.782393 master-0 kubenswrapper[8244]: I0318 09:58:49.778130 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" Mar 18 09:58:49.782393 master-0 kubenswrapper[8244]: I0318 09:58:49.778148 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" Mar 18 09:58:49.782393 master-0 kubenswrapper[8244]: I0318 09:58:49.778168 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-custom-resource-state-configmap\") pod 
\"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" Mar 18 09:58:49.782393 master-0 kubenswrapper[8244]: I0318 09:58:49.778192 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 09:58:49.782393 master-0 kubenswrapper[8244]: I0318 09:58:49.778212 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/1cb8ab19-0564-4182-a7e3-0943c1480663-root\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 09:58:49.782393 master-0 kubenswrapper[8244]: I0318 09:58:49.778235 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v8jq\" (UniqueName: \"kubernetes.io/projected/1cb8ab19-0564-4182-a7e3-0943c1480663-kube-api-access-4v8jq\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 09:58:49.782393 master-0 kubenswrapper[8244]: I0318 09:58:49.778260 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-textfile\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 09:58:49.782393 master-0 kubenswrapper[8244]: I0318 09:58:49.778278 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: 
\"kubernetes.io/host-path/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-wtmp\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 09:58:49.782393 master-0 kubenswrapper[8244]: I0318 09:58:49.778295 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1cb8ab19-0564-4182-a7e3-0943c1480663-metrics-client-ca\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 09:58:49.782393 master-0 kubenswrapper[8244]: I0318 09:58:49.778321 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1cb8ab19-0564-4182-a7e3-0943c1480663-sys\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 09:58:49.782393 master-0 kubenswrapper[8244]: I0318 09:58:49.778337 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkvcs\" (UniqueName: \"kubernetes.io/projected/af1bbeee-1faf-43d1-943f-ee5319cef4e9-kube-api-access-nkvcs\") pod \"openshift-state-metrics-5dc6c74576-6rrn7\" (UID: \"af1bbeee-1faf-43d1-943f-ee5319cef4e9\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-6rrn7" Mar 18 09:58:49.782393 master-0 kubenswrapper[8244]: I0318 09:58:49.778353 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-tls\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 09:58:49.782393 master-0 kubenswrapper[8244]: I0318 09:58:49.778371 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5900a401-21c2-47f0-a921-47c648da558d-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" Mar 18 09:58:49.782393 master-0 kubenswrapper[8244]: I0318 09:58:49.778387 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtnxf\" (UniqueName: \"kubernetes.io/projected/5900a401-21c2-47f0-a921-47c648da558d-kube-api-access-qtnxf\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" Mar 18 09:58:49.782393 master-0 kubenswrapper[8244]: I0318 09:58:49.778405 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/5900a401-21c2-47f0-a921-47c648da558d-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" Mar 18 09:58:49.782393 master-0 kubenswrapper[8244]: E0318 09:58:49.779601 8244 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found Mar 18 09:58:49.782393 master-0 kubenswrapper[8244]: E0318 09:58:49.779697 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-tls podName:1cb8ab19-0564-4182-a7e3-0943c1480663 nodeName:}" failed. No retries permitted until 2026-03-18 09:58:50.279664203 +0000 UTC m=+246.759400421 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-tls") pod "node-exporter-l9q9t" (UID: "1cb8ab19-0564-4182-a7e3-0943c1480663") : secret "node-exporter-tls" not found Mar 18 09:58:49.782393 master-0 kubenswrapper[8244]: I0318 09:58:49.779717 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/1cb8ab19-0564-4182-a7e3-0943c1480663-root\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 09:58:49.782393 master-0 kubenswrapper[8244]: I0318 09:58:49.779887 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-textfile\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 09:58:49.782393 master-0 kubenswrapper[8244]: E0318 09:58:49.779990 8244 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: secret "kube-state-metrics-tls" not found Mar 18 09:58:49.782393 master-0 kubenswrapper[8244]: E0318 09:58:49.780054 8244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-tls podName:5900a401-21c2-47f0-a921-47c648da558d nodeName:}" failed. No retries permitted until 2026-03-18 09:58:50.280030402 +0000 UTC m=+246.759766610 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-tls") pod "kube-state-metrics-7bbc969446-8tbkg" (UID: "5900a401-21c2-47f0-a921-47c648da558d") : secret "kube-state-metrics-tls" not found Mar 18 09:58:49.782393 master-0 kubenswrapper[8244]: I0318 09:58:49.780335 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-wtmp\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 09:58:49.782393 master-0 kubenswrapper[8244]: I0318 09:58:49.780424 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/5900a401-21c2-47f0-a921-47c648da558d-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" Mar 18 09:58:49.782393 master-0 kubenswrapper[8244]: I0318 09:58:49.780527 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1cb8ab19-0564-4182-a7e3-0943c1480663-sys\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 09:58:49.782393 master-0 kubenswrapper[8244]: I0318 09:58:49.780706 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/af1bbeee-1faf-43d1-943f-ee5319cef4e9-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-6rrn7\" (UID: \"af1bbeee-1faf-43d1-943f-ee5319cef4e9\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-6rrn7" Mar 18 09:58:49.782393 master-0 kubenswrapper[8244]: I0318 09:58:49.780758 
8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1cb8ab19-0564-4182-a7e3-0943c1480663-metrics-client-ca\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 09:58:49.782393 master-0 kubenswrapper[8244]: I0318 09:58:49.781194 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" Mar 18 09:58:49.782393 master-0 kubenswrapper[8244]: I0318 09:58:49.781253 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5900a401-21c2-47f0-a921-47c648da558d-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" Mar 18 09:58:49.785467 master-0 kubenswrapper[8244]: I0318 09:58:49.782908 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/af1bbeee-1faf-43d1-943f-ee5319cef4e9-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-6rrn7\" (UID: \"af1bbeee-1faf-43d1-943f-ee5319cef4e9\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-6rrn7" Mar 18 09:58:49.785467 master-0 kubenswrapper[8244]: I0318 09:58:49.783590 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" Mar 18 09:58:49.785467 master-0 kubenswrapper[8244]: I0318 09:58:49.784531 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 09:58:49.786259 master-0 kubenswrapper[8244]: I0318 09:58:49.786088 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/af1bbeee-1faf-43d1-943f-ee5319cef4e9-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-6rrn7\" (UID: \"af1bbeee-1faf-43d1-943f-ee5319cef4e9\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-6rrn7" Mar 18 09:58:49.806340 master-0 kubenswrapper[8244]: I0318 09:58:49.806273 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v8jq\" (UniqueName: \"kubernetes.io/projected/1cb8ab19-0564-4182-a7e3-0943c1480663-kube-api-access-4v8jq\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 09:58:49.826814 master-0 kubenswrapper[8244]: I0318 09:58:49.826760 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkvcs\" (UniqueName: \"kubernetes.io/projected/af1bbeee-1faf-43d1-943f-ee5319cef4e9-kube-api-access-nkvcs\") pod \"openshift-state-metrics-5dc6c74576-6rrn7\" (UID: \"af1bbeee-1faf-43d1-943f-ee5319cef4e9\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-6rrn7" Mar 18 
09:58:49.826930 master-0 kubenswrapper[8244]: I0318 09:58:49.826901 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtnxf\" (UniqueName: \"kubernetes.io/projected/5900a401-21c2-47f0-a921-47c648da558d-kube-api-access-qtnxf\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" Mar 18 09:58:49.897023 master-0 kubenswrapper[8244]: I0318 09:58:49.896974 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-6rrn7" Mar 18 09:58:50.284737 master-0 kubenswrapper[8244]: I0318 09:58:50.284633 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-tls\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 09:58:50.285928 master-0 kubenswrapper[8244]: I0318 09:58:50.285886 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" Mar 18 09:58:50.289039 master-0 kubenswrapper[8244]: I0318 09:58:50.289002 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-tls\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 09:58:50.292886 master-0 kubenswrapper[8244]: I0318 09:58:50.291018 8244 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" Mar 18 09:58:50.306403 master-0 kubenswrapper[8244]: I0318 09:58:50.306171 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-5dc6c74576-6rrn7"] Mar 18 09:58:50.314329 master-0 kubenswrapper[8244]: W0318 09:58:50.314274 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf1bbeee_1faf_43d1_943f_ee5319cef4e9.slice/crio-0a709f6a031857e3e4e56dda2c8a6cf2ebbad7bd036491c8c8d4d7ae887efd7b WatchSource:0}: Error finding container 0a709f6a031857e3e4e56dda2c8a6cf2ebbad7bd036491c8c8d4d7ae887efd7b: Status 404 returned error can't find the container with id 0a709f6a031857e3e4e56dda2c8a6cf2ebbad7bd036491c8c8d4d7ae887efd7b Mar 18 09:58:50.401072 master-0 kubenswrapper[8244]: I0318 09:58:50.400747 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:58:50.401072 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:58:50.401072 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:58:50.401072 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:58:50.401072 master-0 kubenswrapper[8244]: I0318 09:58:50.400808 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:58:50.517730 master-0 kubenswrapper[8244]: 
I0318 09:58:50.517661 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 09:58:50.544875 master-0 kubenswrapper[8244]: I0318 09:58:50.542447 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" Mar 18 09:58:50.547308 master-0 kubenswrapper[8244]: W0318 09:58:50.547251 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cb8ab19_0564_4182_a7e3_0943c1480663.slice/crio-fc70fe385192b60cb00cc2ccd1eb9ea175a5eff153501a735cc786b1100d45a8 WatchSource:0}: Error finding container fc70fe385192b60cb00cc2ccd1eb9ea175a5eff153501a735cc786b1100d45a8: Status 404 returned error can't find the container with id fc70fe385192b60cb00cc2ccd1eb9ea175a5eff153501a735cc786b1100d45a8 Mar 18 09:58:50.980985 master-0 kubenswrapper[8244]: I0318 09:58:50.980746 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg"] Mar 18 09:58:51.267427 master-0 kubenswrapper[8244]: I0318 09:58:51.267290 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-6rrn7" event={"ID":"af1bbeee-1faf-43d1-943f-ee5319cef4e9","Type":"ContainerStarted","Data":"05c73c939863e91982f8c494e9919f8e40b17be1c7faa30638f716874df62b37"} Mar 18 09:58:51.267427 master-0 kubenswrapper[8244]: I0318 09:58:51.267342 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-6rrn7" event={"ID":"af1bbeee-1faf-43d1-943f-ee5319cef4e9","Type":"ContainerStarted","Data":"dd9a9c03e8d847e343b00a6167d8941789ea5e4fd2e9d73faffbe235226ecdf1"} Mar 18 09:58:51.267427 master-0 kubenswrapper[8244]: I0318 09:58:51.267353 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-6rrn7" event={"ID":"af1bbeee-1faf-43d1-943f-ee5319cef4e9","Type":"ContainerStarted","Data":"0a709f6a031857e3e4e56dda2c8a6cf2ebbad7bd036491c8c8d4d7ae887efd7b"} Mar 18 09:58:51.268640 master-0 kubenswrapper[8244]: I0318 09:58:51.268591 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-l9q9t" event={"ID":"1cb8ab19-0564-4182-a7e3-0943c1480663","Type":"ContainerStarted","Data":"fc70fe385192b60cb00cc2ccd1eb9ea175a5eff153501a735cc786b1100d45a8"} Mar 18 09:58:51.269551 master-0 kubenswrapper[8244]: I0318 09:58:51.269510 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" event={"ID":"5900a401-21c2-47f0-a921-47c648da558d","Type":"ContainerStarted","Data":"5f264524ff7942903d23e39e84e002c2a4f349e860595476e5954b840e22c114"} Mar 18 09:58:51.401321 master-0 kubenswrapper[8244]: I0318 09:58:51.401274 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:58:51.401321 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:58:51.401321 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:58:51.401321 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:58:51.401623 master-0 kubenswrapper[8244]: I0318 09:58:51.401339 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:58:52.401067 master-0 kubenswrapper[8244]: I0318 09:58:52.401007 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:58:52.401067 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:58:52.401067 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:58:52.401067 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:58:52.401690 master-0 kubenswrapper[8244]: I0318 09:58:52.401079 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:58:53.280992 master-0 kubenswrapper[8244]: I0318 09:58:53.280935 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-l9q9t" event={"ID":"1cb8ab19-0564-4182-a7e3-0943c1480663","Type":"ContainerStarted","Data":"56303ad5942aabce8c0f739f5e78ec830c4f13ce66a281475244962d17c4dbb4"} Mar 18 09:58:53.284404 master-0 kubenswrapper[8244]: I0318 09:58:53.284361 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" event={"ID":"5900a401-21c2-47f0-a921-47c648da558d","Type":"ContainerStarted","Data":"f4540e444af71438ad25c396198a3b1bfaee27f1b71f8e9c52b0cbc18612052b"} Mar 18 09:58:53.289336 master-0 kubenswrapper[8244]: I0318 09:58:53.289306 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-6rrn7" event={"ID":"af1bbeee-1faf-43d1-943f-ee5319cef4e9","Type":"ContainerStarted","Data":"351ca628f2e1ca5bfe77999fc8d23d32ed472fcdc4f3592a27f7cb80a39903d9"} Mar 18 09:58:53.341780 master-0 kubenswrapper[8244]: I0318 09:58:53.341707 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-6rrn7" podStartSLOduration=2.033264167 podStartE2EDuration="4.341693041s" 
podCreationTimestamp="2026-03-18 09:58:49 +0000 UTC" firstStartedPulling="2026-03-18 09:58:50.58716175 +0000 UTC m=+247.066897878" lastFinishedPulling="2026-03-18 09:58:52.895590594 +0000 UTC m=+249.375326752" observedRunningTime="2026-03-18 09:58:53.337456528 +0000 UTC m=+249.817192656" watchObservedRunningTime="2026-03-18 09:58:53.341693041 +0000 UTC m=+249.821429169" Mar 18 09:58:53.400238 master-0 kubenswrapper[8244]: I0318 09:58:53.400195 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:58:53.400238 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:58:53.400238 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:58:53.400238 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:58:53.400461 master-0 kubenswrapper[8244]: I0318 09:58:53.400251 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:58:54.298626 master-0 kubenswrapper[8244]: I0318 09:58:54.298550 8244 generic.go:334] "Generic (PLEG): container finished" podID="1cb8ab19-0564-4182-a7e3-0943c1480663" containerID="56303ad5942aabce8c0f739f5e78ec830c4f13ce66a281475244962d17c4dbb4" exitCode=0 Mar 18 09:58:54.299662 master-0 kubenswrapper[8244]: I0318 09:58:54.298606 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-l9q9t" event={"ID":"1cb8ab19-0564-4182-a7e3-0943c1480663","Type":"ContainerDied","Data":"56303ad5942aabce8c0f739f5e78ec830c4f13ce66a281475244962d17c4dbb4"} Mar 18 09:58:54.301298 master-0 kubenswrapper[8244]: I0318 09:58:54.301246 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" event={"ID":"5900a401-21c2-47f0-a921-47c648da558d","Type":"ContainerStarted","Data":"a8777102608cdb2214f01609e754267894b2b04bd3a724e4d319dd76cfcb1bcc"} Mar 18 09:58:54.301298 master-0 kubenswrapper[8244]: I0318 09:58:54.301298 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" event={"ID":"5900a401-21c2-47f0-a921-47c648da558d","Type":"ContainerStarted","Data":"473ec442fb960069766eab4bd2d494e54c03b767097ae6f1e6fa7dcfaaa9c435"} Mar 18 09:58:54.361090 master-0 kubenswrapper[8244]: I0318 09:58:54.360997 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" podStartSLOduration=3.471109763 podStartE2EDuration="5.360976202s" podCreationTimestamp="2026-03-18 09:58:49 +0000 UTC" firstStartedPulling="2026-03-18 09:58:50.998107349 +0000 UTC m=+247.477843477" lastFinishedPulling="2026-03-18 09:58:52.887973758 +0000 UTC m=+249.367709916" observedRunningTime="2026-03-18 09:58:54.357877797 +0000 UTC m=+250.837613945" watchObservedRunningTime="2026-03-18 09:58:54.360976202 +0000 UTC m=+250.840712330" Mar 18 09:58:54.401965 master-0 kubenswrapper[8244]: I0318 09:58:54.401896 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:58:54.401965 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:58:54.401965 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:58:54.401965 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:58:54.402343 master-0 kubenswrapper[8244]: I0318 09:58:54.401967 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" 
podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:58:55.131315 master-0 kubenswrapper[8244]: I0318 09:58:55.131208 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-74c475bc87-xx98m"] Mar 18 09:58:55.132455 master-0 kubenswrapper[8244]: I0318 09:58:55.132394 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 09:58:55.141479 master-0 kubenswrapper[8244]: I0318 09:58:55.141345 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-t4btg" Mar 18 09:58:55.141781 master-0 kubenswrapper[8244]: I0318 09:58:55.141485 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 18 09:58:55.141781 master-0 kubenswrapper[8244]: I0318 09:58:55.141529 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-113q5nsjog6km" Mar 18 09:58:55.141781 master-0 kubenswrapper[8244]: I0318 09:58:55.141666 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 18 09:58:55.141781 master-0 kubenswrapper[8244]: I0318 09:58:55.141766 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 18 09:58:55.142229 master-0 kubenswrapper[8244]: I0318 09:58:55.142084 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 18 09:58:55.151609 master-0 kubenswrapper[8244]: I0318 09:58:55.151086 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-74c475bc87-xx98m"] Mar 18 09:58:55.269328 master-0 kubenswrapper[8244]: I0318 09:58:55.269253 8244 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-client-ca-bundle\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 09:58:55.269328 master-0 kubenswrapper[8244]: I0318 09:58:55.269313 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqx6m\" (UniqueName: \"kubernetes.io/projected/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-kube-api-access-fqx6m\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 09:58:55.269578 master-0 kubenswrapper[8244]: I0318 09:58:55.269357 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-metrics-server-audit-profiles\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 09:58:55.269578 master-0 kubenswrapper[8244]: I0318 09:58:55.269422 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 09:58:55.269578 master-0 kubenswrapper[8244]: I0318 09:58:55.269484 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: 
\"kubernetes.io/empty-dir/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-audit-log\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 09:58:55.269578 master-0 kubenswrapper[8244]: I0318 09:58:55.269514 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-secret-metrics-server-tls\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 09:58:55.269578 master-0 kubenswrapper[8244]: I0318 09:58:55.269565 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-secret-metrics-client-certs\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 09:58:55.315528 master-0 kubenswrapper[8244]: I0318 09:58:55.315454 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-l9q9t" event={"ID":"1cb8ab19-0564-4182-a7e3-0943c1480663","Type":"ContainerStarted","Data":"04f1a45a584d0042afc4976cdec5c5152a5648206c2705f5128605b1d34f5082"} Mar 18 09:58:55.316044 master-0 kubenswrapper[8244]: I0318 09:58:55.315525 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-l9q9t" event={"ID":"1cb8ab19-0564-4182-a7e3-0943c1480663","Type":"ContainerStarted","Data":"68ee63857a29373a1a973ef1766c8813345ae10530c3a3c0e057b502aa38855c"} Mar 18 09:58:55.351531 master-0 kubenswrapper[8244]: I0318 09:58:55.351410 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-l9q9t" 
podStartSLOduration=4.032944079 podStartE2EDuration="6.351389208s" podCreationTimestamp="2026-03-18 09:58:49 +0000 UTC" firstStartedPulling="2026-03-18 09:58:50.57122125 +0000 UTC m=+247.050957378" lastFinishedPulling="2026-03-18 09:58:52.889666349 +0000 UTC m=+249.369402507" observedRunningTime="2026-03-18 09:58:55.349317117 +0000 UTC m=+251.829053255" watchObservedRunningTime="2026-03-18 09:58:55.351389208 +0000 UTC m=+251.831125346" Mar 18 09:58:55.371394 master-0 kubenswrapper[8244]: I0318 09:58:55.371268 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 09:58:55.371927 master-0 kubenswrapper[8244]: I0318 09:58:55.371884 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-audit-log\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 09:58:55.372404 master-0 kubenswrapper[8244]: I0318 09:58:55.372359 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 09:58:55.372524 master-0 kubenswrapper[8244]: I0318 09:58:55.372426 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: 
\"kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-secret-metrics-server-tls\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 09:58:55.372524 master-0 kubenswrapper[8244]: I0318 09:58:55.372520 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-secret-metrics-client-certs\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 09:58:55.372663 master-0 kubenswrapper[8244]: I0318 09:58:55.372631 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-client-ca-bundle\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 09:58:55.372764 master-0 kubenswrapper[8244]: I0318 09:58:55.372685 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqx6m\" (UniqueName: \"kubernetes.io/projected/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-kube-api-access-fqx6m\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 09:58:55.372866 master-0 kubenswrapper[8244]: I0318 09:58:55.372808 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-metrics-server-audit-profiles\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 
09:58:55.374091 master-0 kubenswrapper[8244]: I0318 09:58:55.373633 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-audit-log\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 09:58:55.374349 master-0 kubenswrapper[8244]: I0318 09:58:55.374259 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-metrics-server-audit-profiles\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 09:58:55.378804 master-0 kubenswrapper[8244]: I0318 09:58:55.378747 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-secret-metrics-server-tls\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 09:58:55.385030 master-0 kubenswrapper[8244]: I0318 09:58:55.384968 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-secret-metrics-client-certs\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 09:58:55.385561 master-0 kubenswrapper[8244]: I0318 09:58:55.385498 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-client-ca-bundle\") pod \"metrics-server-74c475bc87-xx98m\" 
(UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 09:58:55.390319 master-0 kubenswrapper[8244]: I0318 09:58:55.390254 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqx6m\" (UniqueName: \"kubernetes.io/projected/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-kube-api-access-fqx6m\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 09:58:55.401527 master-0 kubenswrapper[8244]: I0318 09:58:55.401438 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:58:55.401527 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:58:55.401527 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:58:55.401527 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:58:55.402024 master-0 kubenswrapper[8244]: I0318 09:58:55.401525 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:58:55.463184 master-0 kubenswrapper[8244]: I0318 09:58:55.463113 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 09:58:55.921178 master-0 kubenswrapper[8244]: I0318 09:58:55.921120 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-74c475bc87-xx98m"] Mar 18 09:58:55.925267 master-0 kubenswrapper[8244]: W0318 09:58:55.925204 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod106fc2a2_9e7b_4f86_94b8_b1a1906646d8.slice/crio-adda5560398a1e9cd1248ce8d3ae8608ee224ce0ee349c65f7682b313879aa78 WatchSource:0}: Error finding container adda5560398a1e9cd1248ce8d3ae8608ee224ce0ee349c65f7682b313879aa78: Status 404 returned error can't find the container with id adda5560398a1e9cd1248ce8d3ae8608ee224ce0ee349c65f7682b313879aa78 Mar 18 09:58:56.329308 master-0 kubenswrapper[8244]: I0318 09:58:56.326870 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" event={"ID":"106fc2a2-9e7b-4f86-94b8-b1a1906646d8","Type":"ContainerStarted","Data":"adda5560398a1e9cd1248ce8d3ae8608ee224ce0ee349c65f7682b313879aa78"} Mar 18 09:58:56.401062 master-0 kubenswrapper[8244]: I0318 09:58:56.400994 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:58:56.401062 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:58:56.401062 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:58:56.401062 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:58:56.401405 master-0 kubenswrapper[8244]: I0318 09:58:56.401073 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:58:57.403550 master-0 kubenswrapper[8244]: I0318 09:58:57.403396 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:58:57.403550 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:58:57.403550 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:58:57.403550 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:58:57.403550 master-0 kubenswrapper[8244]: I0318 09:58:57.403459 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:58:58.339947 master-0 kubenswrapper[8244]: I0318 09:58:58.339868 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" event={"ID":"106fc2a2-9e7b-4f86-94b8-b1a1906646d8","Type":"ContainerStarted","Data":"aae90b8b3fa095e8ace85536181927d10e6ffa9bfdc0f83781e48ac6ccdad6c0"} Mar 18 09:58:58.358347 master-0 kubenswrapper[8244]: I0318 09:58:58.358246 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" podStartSLOduration=1.750624212 podStartE2EDuration="3.358228584s" podCreationTimestamp="2026-03-18 09:58:55 +0000 UTC" firstStartedPulling="2026-03-18 09:58:55.928591289 +0000 UTC m=+252.408327417" lastFinishedPulling="2026-03-18 09:58:57.536195661 +0000 UTC m=+254.015931789" observedRunningTime="2026-03-18 09:58:58.356281376 +0000 UTC m=+254.836017504" watchObservedRunningTime="2026-03-18 09:58:58.358228584 +0000 UTC m=+254.837964722" Mar 18 09:58:58.401086 master-0 
kubenswrapper[8244]: I0318 09:58:58.400988 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:58:58.401086 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:58:58.401086 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:58:58.401086 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:58:58.401086 master-0 kubenswrapper[8244]: I0318 09:58:58.401056 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:58:59.401067 master-0 kubenswrapper[8244]: I0318 09:58:59.400986 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:58:59.401067 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:58:59.401067 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:58:59.401067 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:58:59.401067 master-0 kubenswrapper[8244]: I0318 09:58:59.401060 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:59:00.401179 master-0 kubenswrapper[8244]: I0318 09:59:00.401070 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:59:00.401179 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:59:00.401179 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:59:00.401179 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:59:00.401179 master-0 kubenswrapper[8244]: I0318 09:59:00.401160 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:59:01.401617 master-0 kubenswrapper[8244]: I0318 09:59:01.401543 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:59:01.401617 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:59:01.401617 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:59:01.401617 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:59:01.402616 master-0 kubenswrapper[8244]: I0318 09:59:01.401629 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:59:02.402617 master-0 kubenswrapper[8244]: I0318 09:59:02.402530 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:59:02.402617 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:59:02.402617 master-0 kubenswrapper[8244]: 
[+]process-running ok Mar 18 09:59:02.402617 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:59:02.403309 master-0 kubenswrapper[8244]: I0318 09:59:02.402629 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:59:03.402401 master-0 kubenswrapper[8244]: I0318 09:59:03.402329 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:59:03.402401 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:59:03.402401 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:59:03.402401 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:59:03.402401 master-0 kubenswrapper[8244]: I0318 09:59:03.402400 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:59:04.400972 master-0 kubenswrapper[8244]: I0318 09:59:04.400876 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:59:04.400972 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:59:04.400972 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:59:04.400972 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:59:04.401316 master-0 kubenswrapper[8244]: I0318 09:59:04.400970 8244 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:59:05.401756 master-0 kubenswrapper[8244]: I0318 09:59:05.401517 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:59:05.401756 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:59:05.401756 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:59:05.401756 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:59:05.402569 master-0 kubenswrapper[8244]: I0318 09:59:05.401789 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:59:06.400988 master-0 kubenswrapper[8244]: I0318 09:59:06.400877 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:59:06.400988 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:59:06.400988 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:59:06.400988 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:59:06.400988 master-0 kubenswrapper[8244]: I0318 09:59:06.400940 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 
18 09:59:07.402482 master-0 kubenswrapper[8244]: I0318 09:59:07.402401 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:07.402482 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:07.402482 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:07.402482 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:07.403236 master-0 kubenswrapper[8244]: I0318 09:59:07.402480 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:08.401920 master-0 kubenswrapper[8244]: I0318 09:59:08.401793 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:08.401920 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:08.401920 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:08.401920 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:08.401920 master-0 kubenswrapper[8244]: I0318 09:59:08.401882 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:09.401876 master-0 kubenswrapper[8244]: I0318 09:59:09.401773 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:09.401876 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:09.401876 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:09.401876 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:09.403059 master-0 kubenswrapper[8244]: I0318 09:59:09.401888 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:10.401598 master-0 kubenswrapper[8244]: I0318 09:59:10.401539 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:10.401598 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:10.401598 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:10.401598 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:10.401943 master-0 kubenswrapper[8244]: I0318 09:59:10.401600 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:11.402480 master-0 kubenswrapper[8244]: I0318 09:59:11.402379 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:11.402480 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:11.402480 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:11.402480 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:11.403771 master-0 kubenswrapper[8244]: I0318 09:59:11.402489 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:12.401133 master-0 kubenswrapper[8244]: I0318 09:59:12.401064 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:12.401133 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:12.401133 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:12.401133 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:12.401396 master-0 kubenswrapper[8244]: I0318 09:59:12.401165 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:13.402623 master-0 kubenswrapper[8244]: I0318 09:59:13.402514 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:13.402623 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:13.402623 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:13.402623 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:13.402623 master-0 kubenswrapper[8244]: I0318 09:59:13.402607 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:14.401680 master-0 kubenswrapper[8244]: I0318 09:59:14.401605 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:14.401680 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:14.401680 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:14.401680 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:14.401680 master-0 kubenswrapper[8244]: I0318 09:59:14.401669 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:15.401451 master-0 kubenswrapper[8244]: I0318 09:59:15.401092 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:15.401451 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:15.401451 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:15.401451 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:15.401451 master-0 kubenswrapper[8244]: I0318 09:59:15.401192 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:15.463572 master-0 kubenswrapper[8244]: I0318 09:59:15.463502 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-74c475bc87-xx98m"
Mar 18 09:59:15.463770 master-0 kubenswrapper[8244]: I0318 09:59:15.463622 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-74c475bc87-xx98m"
Mar 18 09:59:16.401965 master-0 kubenswrapper[8244]: I0318 09:59:16.401894 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:16.401965 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:16.401965 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:16.401965 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:16.402621 master-0 kubenswrapper[8244]: I0318 09:59:16.401995 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:17.402269 master-0 kubenswrapper[8244]: I0318 09:59:17.402191 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:17.402269 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:17.402269 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:17.402269 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:17.403398 master-0 kubenswrapper[8244]: I0318 09:59:17.402273 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:18.402134 master-0 kubenswrapper[8244]: I0318 09:59:18.402050 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:18.402134 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:18.402134 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:18.402134 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:18.403147 master-0 kubenswrapper[8244]: I0318 09:59:18.402146 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:19.401178 master-0 kubenswrapper[8244]: I0318 09:59:19.401112 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:19.401178 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:19.401178 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:19.401178 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:19.402116 master-0 kubenswrapper[8244]: I0318 09:59:19.401189 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:20.401899 master-0 kubenswrapper[8244]: I0318 09:59:20.401687 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:20.401899 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:20.401899 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:20.401899 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:20.402905 master-0 kubenswrapper[8244]: I0318 09:59:20.401806 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:20.513367 master-0 kubenswrapper[8244]: I0318 09:59:20.512357 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-kr5kz_accc57fb-75f5-4f89-9804-6ede7f77e27c/ingress-operator/1.log"
Mar 18 09:59:20.513957 master-0 kubenswrapper[8244]: I0318 09:59:20.513922 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-kr5kz_accc57fb-75f5-4f89-9804-6ede7f77e27c/ingress-operator/0.log"
Mar 18 09:59:20.514095 master-0 kubenswrapper[8244]: I0318 09:59:20.513988 8244 generic.go:334] "Generic (PLEG): container finished" podID="accc57fb-75f5-4f89-9804-6ede7f77e27c" containerID="8be1e41fb91899198366216500a2564664d7ef8ef90cbe9f4c1e19358a42df09" exitCode=1
Mar 18 09:59:20.514095 master-0 kubenswrapper[8244]: I0318 09:59:20.514029 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" event={"ID":"accc57fb-75f5-4f89-9804-6ede7f77e27c","Type":"ContainerDied","Data":"8be1e41fb91899198366216500a2564664d7ef8ef90cbe9f4c1e19358a42df09"}
Mar 18 09:59:20.514095 master-0 kubenswrapper[8244]: I0318 09:59:20.514076 8244 scope.go:117] "RemoveContainer" containerID="206825c3b2d516109311b9ec6547c75a5e9979c7b55c567cf556284de0799148"
Mar 18 09:59:20.514970 master-0 kubenswrapper[8244]: I0318 09:59:20.514794 8244 scope.go:117] "RemoveContainer" containerID="8be1e41fb91899198366216500a2564664d7ef8ef90cbe9f4c1e19358a42df09"
Mar 18 09:59:20.515545 master-0 kubenswrapper[8244]: E0318 09:59:20.515162 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-kr5kz_openshift-ingress-operator(accc57fb-75f5-4f89-9804-6ede7f77e27c)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" podUID="accc57fb-75f5-4f89-9804-6ede7f77e27c"
Mar 18 09:59:21.401847 master-0 kubenswrapper[8244]: I0318 09:59:21.401771 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:21.401847 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:21.401847 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:21.401847 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:21.402430 master-0 kubenswrapper[8244]: I0318 09:59:21.401892 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:21.523793 master-0 kubenswrapper[8244]: I0318 09:59:21.523731 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-kr5kz_accc57fb-75f5-4f89-9804-6ede7f77e27c/ingress-operator/1.log"
Mar 18 09:59:22.401304 master-0 kubenswrapper[8244]: I0318 09:59:22.401247 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:22.401304 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:22.401304 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:22.401304 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:22.401746 master-0 kubenswrapper[8244]: I0318 09:59:22.401320 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:23.400963 master-0 kubenswrapper[8244]: I0318 09:59:23.400878 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:23.400963 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:23.400963 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:23.400963 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:23.401671 master-0 kubenswrapper[8244]: I0318 09:59:23.400981 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:24.401984 master-0 kubenswrapper[8244]: I0318 09:59:24.401885 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:24.401984 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:24.401984 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:24.401984 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:24.401984 master-0 kubenswrapper[8244]: I0318 09:59:24.401982 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:25.402250 master-0 kubenswrapper[8244]: I0318 09:59:25.402173 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:25.402250 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:25.402250 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:25.402250 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:25.403299 master-0 kubenswrapper[8244]: I0318 09:59:25.402257 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:26.402110 master-0 kubenswrapper[8244]: I0318 09:59:26.401978 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:26.402110 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:26.402110 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:26.402110 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:26.402110 master-0 kubenswrapper[8244]: I0318 09:59:26.402091 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:27.401591 master-0 kubenswrapper[8244]: I0318 09:59:27.401539 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:27.401591 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:27.401591 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:27.401591 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:27.402023 master-0 kubenswrapper[8244]: I0318 09:59:27.401990 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:28.401724 master-0 kubenswrapper[8244]: I0318 09:59:28.401649 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:28.401724 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:28.401724 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:28.401724 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:28.402705 master-0 kubenswrapper[8244]: I0318 09:59:28.401736 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:29.401450 master-0 kubenswrapper[8244]: I0318 09:59:29.401396 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:29.401450 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:29.401450 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:29.401450 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:29.402516 master-0 kubenswrapper[8244]: I0318 09:59:29.402471 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:30.402094 master-0 kubenswrapper[8244]: I0318 09:59:30.402039 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:30.402094 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:30.402094 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:30.402094 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:30.402758 master-0 kubenswrapper[8244]: I0318 09:59:30.402725 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:31.402203 master-0 kubenswrapper[8244]: I0318 09:59:31.402141 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:31.402203 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:31.402203 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:31.402203 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:31.403252 master-0 kubenswrapper[8244]: I0318 09:59:31.403207 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:32.400892 master-0 kubenswrapper[8244]: I0318 09:59:32.400782 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:32.400892 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:32.400892 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:32.400892 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:32.400892 master-0 kubenswrapper[8244]: I0318 09:59:32.400881 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:33.401121 master-0 kubenswrapper[8244]: I0318 09:59:33.401071 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:33.401121 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:33.401121 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:33.401121 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:33.401748 master-0 kubenswrapper[8244]: I0318 09:59:33.401144 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:34.401801 master-0 kubenswrapper[8244]: I0318 09:59:34.401724 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:34.401801 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:34.401801 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:34.401801 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:34.401801 master-0 kubenswrapper[8244]: I0318 09:59:34.401800 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:34.733148 master-0 kubenswrapper[8244]: I0318 09:59:34.732974 8244 scope.go:117] "RemoveContainer" containerID="8be1e41fb91899198366216500a2564664d7ef8ef90cbe9f4c1e19358a42df09"
Mar 18 09:59:35.401921 master-0 kubenswrapper[8244]: I0318 09:59:35.401805 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:35.401921 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:35.401921 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:35.401921 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:35.401921 master-0 kubenswrapper[8244]: I0318 09:59:35.401890 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:35.472870 master-0 kubenswrapper[8244]: I0318 09:59:35.471229 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-74c475bc87-xx98m"
Mar 18 09:59:35.477031 master-0 kubenswrapper[8244]: I0318 09:59:35.476970 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-74c475bc87-xx98m"
Mar 18 09:59:35.628190 master-0 kubenswrapper[8244]: I0318 09:59:35.628095 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-kr5kz_accc57fb-75f5-4f89-9804-6ede7f77e27c/ingress-operator/1.log"
Mar 18 09:59:35.629082 master-0 kubenswrapper[8244]: I0318 09:59:35.629061 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" event={"ID":"accc57fb-75f5-4f89-9804-6ede7f77e27c","Type":"ContainerStarted","Data":"0d30b4f631b8eb9dde0a0925230da53e5145662b1505b3eb3b7912145bc9b9d7"}
Mar 18 09:59:36.402330 master-0 kubenswrapper[8244]: I0318 09:59:36.402260 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:36.402330 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:36.402330 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:36.402330 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:36.403292 master-0 kubenswrapper[8244]: I0318 09:59:36.402362 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:37.402068 master-0 kubenswrapper[8244]: I0318 09:59:37.401981 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:37.402068 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:37.402068 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:37.402068 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:37.403072 master-0 kubenswrapper[8244]: I0318 09:59:37.402074 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:38.401194 master-0 kubenswrapper[8244]: I0318 09:59:38.401124 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:38.401194 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:38.401194 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:38.401194 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:38.401694 master-0 kubenswrapper[8244]: I0318 09:59:38.401215 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:39.402336 master-0 kubenswrapper[8244]: I0318 09:59:39.402260 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:39.402336 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:39.402336 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:39.402336 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:39.403320 master-0 kubenswrapper[8244]: I0318 09:59:39.402353 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:40.401657 master-0 kubenswrapper[8244]: I0318 09:59:40.401579 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:40.401657 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:40.401657 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:40.401657 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:40.402297 master-0 kubenswrapper[8244]: I0318 09:59:40.401676 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:41.401337 master-0 kubenswrapper[8244]: I0318 09:59:41.401237 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:41.401337 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:41.401337 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:41.401337 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:41.402369 master-0 kubenswrapper[8244]: I0318 09:59:41.401350 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:42.402296 master-0 kubenswrapper[8244]: I0318 09:59:42.402215 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:42.402296 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:42.402296 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:42.402296 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:42.402296 master-0 kubenswrapper[8244]: I0318 09:59:42.402285 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:43.401147 master-0 kubenswrapper[8244]: I0318 09:59:43.401009 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:43.401147 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:43.401147 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:43.401147 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:43.401147 master-0 kubenswrapper[8244]: I0318 09:59:43.401142 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:43.907528 master-0 kubenswrapper[8244]: I0318 09:59:43.907428 8244 scope.go:117] "RemoveContainer" containerID="bd008f41fdcd1da5525afb4e170a05e1a1f3c337467181cdcfc21b203b5549da"
Mar 18 09:59:44.402316 master-0 kubenswrapper[8244]: I0318 09:59:44.402221 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:44.402316 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:44.402316 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:44.402316 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:44.402745 master-0 kubenswrapper[8244]: I0318 09:59:44.402314 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:45.402794 master-0 kubenswrapper[8244]: I0318 09:59:45.402697 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:45.402794 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:45.402794 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:45.402794 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:45.404033 master-0 kubenswrapper[8244]: I0318 09:59:45.402796 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:46.401531 master-0 kubenswrapper[8244]: I0318 09:59:46.401472 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:59:46.401531 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 09:59:46.401531 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 09:59:46.401531 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 09:59:46.402281 master-0 kubenswrapper[8244]: I0318 09:59:46.401558 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:59:47.402372 master-0 kubenswrapper[8244]: I0318 09:59:47.402308 8244 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:59:47.402372 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:59:47.402372 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:59:47.402372 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:59:47.402966 master-0 kubenswrapper[8244]: I0318 09:59:47.402399 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:59:48.401798 master-0 kubenswrapper[8244]: I0318 09:59:48.401677 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:59:48.401798 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:59:48.401798 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:59:48.401798 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:59:48.401798 master-0 kubenswrapper[8244]: I0318 09:59:48.401757 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:59:49.402129 master-0 kubenswrapper[8244]: I0318 09:59:49.402013 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 09:59:49.402129 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:59:49.402129 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:59:49.402129 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:59:49.402129 master-0 kubenswrapper[8244]: I0318 09:59:49.402109 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:59:50.400736 master-0 kubenswrapper[8244]: I0318 09:59:50.400666 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:59:50.400736 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:59:50.400736 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:59:50.400736 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:59:50.400736 master-0 kubenswrapper[8244]: I0318 09:59:50.400727 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:59:51.401054 master-0 kubenswrapper[8244]: I0318 09:59:51.400800 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:59:51.401054 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:59:51.401054 master-0 kubenswrapper[8244]: [+]process-running ok 
Mar 18 09:59:51.401054 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:59:51.401617 master-0 kubenswrapper[8244]: I0318 09:59:51.401056 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:59:52.400672 master-0 kubenswrapper[8244]: I0318 09:59:52.400581 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:59:52.400672 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:59:52.400672 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:59:52.400672 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:59:52.401882 master-0 kubenswrapper[8244]: I0318 09:59:52.400696 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:59:53.401213 master-0 kubenswrapper[8244]: I0318 09:59:53.401061 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:59:53.401213 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:59:53.401213 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:59:53.401213 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:59:53.401213 master-0 kubenswrapper[8244]: I0318 09:59:53.401169 8244 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:59:54.401652 master-0 kubenswrapper[8244]: I0318 09:59:54.401532 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:59:54.401652 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:59:54.401652 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:59:54.401652 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:59:54.401652 master-0 kubenswrapper[8244]: I0318 09:59:54.401624 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:59:55.407049 master-0 kubenswrapper[8244]: I0318 09:59:55.406940 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:59:55.407049 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:59:55.407049 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:59:55.407049 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:59:55.407049 master-0 kubenswrapper[8244]: I0318 09:59:55.407044 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:59:56.401439 
master-0 kubenswrapper[8244]: I0318 09:59:56.401349 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:59:56.401439 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:59:56.401439 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:59:56.401439 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:59:56.401439 master-0 kubenswrapper[8244]: I0318 09:59:56.401423 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:59:57.401383 master-0 kubenswrapper[8244]: I0318 09:59:57.401309 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:59:57.401383 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:59:57.401383 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:59:57.401383 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:59:57.401383 master-0 kubenswrapper[8244]: I0318 09:59:57.401377 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:59:58.401716 master-0 kubenswrapper[8244]: I0318 09:59:58.401635 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:59:58.401716 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:59:58.401716 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:59:58.401716 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:59:58.403124 master-0 kubenswrapper[8244]: I0318 09:59:58.401743 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:59:59.402358 master-0 kubenswrapper[8244]: I0318 09:59:59.402243 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:59:59.402358 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 09:59:59.402358 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 09:59:59.402358 master-0 kubenswrapper[8244]: healthz check failed Mar 18 09:59:59.402358 master-0 kubenswrapper[8244]: I0318 09:59:59.402340 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:00:00.402437 master-0 kubenswrapper[8244]: I0318 10:00:00.402363 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:00:00.402437 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:00:00.402437 master-0 
kubenswrapper[8244]: [+]process-running ok Mar 18 10:00:00.402437 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:00:00.403509 master-0 kubenswrapper[8244]: I0318 10:00:00.402452 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:00:01.402627 master-0 kubenswrapper[8244]: I0318 10:00:01.402524 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:00:01.402627 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:00:01.402627 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:00:01.402627 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:00:01.403676 master-0 kubenswrapper[8244]: I0318 10:00:01.402638 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:00:02.401675 master-0 kubenswrapper[8244]: I0318 10:00:02.401576 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:00:02.401675 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:00:02.401675 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:00:02.401675 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:00:02.401675 master-0 kubenswrapper[8244]: I0318 10:00:02.401664 8244 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:00:03.401310 master-0 kubenswrapper[8244]: I0318 10:00:03.401226 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:00:03.401310 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:00:03.401310 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:00:03.401310 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:00:03.401994 master-0 kubenswrapper[8244]: I0318 10:00:03.401327 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:00:04.402393 master-0 kubenswrapper[8244]: I0318 10:00:04.402287 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:00:04.402393 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:00:04.402393 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:00:04.402393 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:00:04.403625 master-0 kubenswrapper[8244]: I0318 10:00:04.402397 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 18 10:00:05.402558 master-0 kubenswrapper[8244]: I0318 10:00:05.402083 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:00:05.402558 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:00:05.402558 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:00:05.402558 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:00:05.402558 master-0 kubenswrapper[8244]: I0318 10:00:05.402162 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:00:06.402061 master-0 kubenswrapper[8244]: I0318 10:00:06.401962 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:00:06.402061 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:00:06.402061 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:00:06.402061 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:00:06.402061 master-0 kubenswrapper[8244]: I0318 10:00:06.402051 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:00:07.402187 master-0 kubenswrapper[8244]: I0318 10:00:07.402122 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:00:07.402187 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:00:07.402187 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:00:07.402187 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:00:07.403233 master-0 kubenswrapper[8244]: I0318 10:00:07.402207 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:00:08.402328 master-0 kubenswrapper[8244]: I0318 10:00:08.402222 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:00:08.402328 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:00:08.402328 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:00:08.402328 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:00:08.403033 master-0 kubenswrapper[8244]: I0318 10:00:08.402354 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:00:09.401963 master-0 kubenswrapper[8244]: I0318 10:00:09.401651 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:00:09.401963 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 
10:00:09.401963 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:00:09.401963 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:00:09.402427 master-0 kubenswrapper[8244]: I0318 10:00:09.401967 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:00:10.401804 master-0 kubenswrapper[8244]: I0318 10:00:10.401644 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:00:10.401804 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:00:10.401804 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:00:10.401804 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:00:10.401804 master-0 kubenswrapper[8244]: I0318 10:00:10.401791 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:00:11.402165 master-0 kubenswrapper[8244]: I0318 10:00:11.402093 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:00:11.402165 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:00:11.402165 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:00:11.402165 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:00:11.402959 master-0 kubenswrapper[8244]: I0318 10:00:11.402164 
8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:00:12.403110 master-0 kubenswrapper[8244]: I0318 10:00:12.402993 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:00:12.403110 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:00:12.403110 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:00:12.403110 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:00:12.404197 master-0 kubenswrapper[8244]: I0318 10:00:12.403115 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:00:13.401848 master-0 kubenswrapper[8244]: I0318 10:00:13.401762 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:00:13.401848 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:00:13.401848 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:00:13.401848 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:00:13.402272 master-0 kubenswrapper[8244]: I0318 10:00:13.401883 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 18 10:00:14.401684 master-0 kubenswrapper[8244]: I0318 10:00:14.401594 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:00:14.401684 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:00:14.401684 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:00:14.401684 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:00:14.402718 master-0 kubenswrapper[8244]: I0318 10:00:14.401682 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:00:15.404909 master-0 kubenswrapper[8244]: I0318 10:00:15.404002 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:00:15.404909 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:00:15.404909 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:00:15.404909 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:00:15.404909 master-0 kubenswrapper[8244]: I0318 10:00:15.404142 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:00:16.401916 master-0 kubenswrapper[8244]: I0318 10:00:16.401682 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:00:16.401916 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:00:16.401916 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:00:16.401916 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:00:16.401916 master-0 kubenswrapper[8244]: I0318 10:00:16.401791 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:00:17.401332 master-0 kubenswrapper[8244]: I0318 10:00:17.401248 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:00:17.401332 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:00:17.401332 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:00:17.401332 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:00:17.402376 master-0 kubenswrapper[8244]: I0318 10:00:17.401340 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:00:18.401925 master-0 kubenswrapper[8244]: I0318 10:00:18.401817 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:00:18.401925 master-0 kubenswrapper[8244]: 
[-]has-synced failed: reason withheld
Mar 18 10:00:18.401925 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:00:18.401925 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:00:18.403021 master-0 kubenswrapper[8244]: I0318 10:00:18.401941 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:00:19.402417 master-0 kubenswrapper[8244]: I0318 10:00:19.402345 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:00:19.402417 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:00:19.402417 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:00:19.402417 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:00:19.403360 master-0 kubenswrapper[8244]: I0318 10:00:19.402435 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
[… identical patch_prober/prober Startup probe failure blocks for pod/router-default-7dcf5569b5-82tbk repeated once per second from 10:00:20 through 10:00:26; each reports [-]backend-http failed, [-]has-synced failed, [+]process-running ok, healthz check failed …]
Mar 18 10:00:26.402246 master-0 kubenswrapper[8244]: I0318 10:00:26.401912 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-82tbk"
Mar 18 10:00:26.402946 master-0 kubenswrapper[8244]: I0318 10:00:26.402809 8244 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"83d2d113ec64b26f85c2da77fcf83ffd1c0559babf05a97c582bf5bda8d8a7a5"} pod="openshift-ingress/router-default-7dcf5569b5-82tbk" containerMessage="Container router failed startup probe, will be restarted"
Mar 18 10:00:26.403685 master-0 kubenswrapper[8244]: I0318 10:00:26.402951 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" containerID="cri-o://83d2d113ec64b26f85c2da77fcf83ffd1c0559babf05a97c582bf5bda8d8a7a5" gracePeriod=3600
Mar 18 10:01:13.382658 master-0 kubenswrapper[8244]: I0318 10:01:13.382572 8244 generic.go:334] "Generic (PLEG): container finished" podID="43d54514-989c-4c82-93f9-153b44eacdd1" containerID="83d2d113ec64b26f85c2da77fcf83ffd1c0559babf05a97c582bf5bda8d8a7a5" exitCode=0
Mar 18 10:01:13.382658 master-0 kubenswrapper[8244]: I0318 10:01:13.382609 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" event={"ID":"43d54514-989c-4c82-93f9-153b44eacdd1","Type":"ContainerDied","Data":"83d2d113ec64b26f85c2da77fcf83ffd1c0559babf05a97c582bf5bda8d8a7a5"}
Mar 18 10:01:13.383332 master-0 kubenswrapper[8244]: I0318 10:01:13.382726 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" event={"ID":"43d54514-989c-4c82-93f9-153b44eacdd1","Type":"ContainerStarted","Data":"0056d6e24bcc6dc57e3453a9e7f141adeb078909a14a7b6029f52e100df60161"}
Mar 18 10:01:13.400636 master-0 kubenswrapper[8244]: I0318 10:01:13.400579 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-82tbk"
Mar 18 10:01:13.402150 master-0 kubenswrapper[8244]: I0318 10:01:13.401703 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-82tbk"
Mar 18 10:01:13.404803 master-0 kubenswrapper[8244]: I0318 10:01:13.404766 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:01:13.404803 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:01:13.404803 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:01:13.404803 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:01:13.405080 master-0 kubenswrapper[8244]: I0318 10:01:13.405050 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
[… identical Startup probe failure blocks for the restarted router container repeated once per second from 10:01:14 through 10:01:36 …]
Mar 18 10:01:36.537750 master-0 kubenswrapper[8244]: I0318 10:01:36.537683 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-kr5kz_accc57fb-75f5-4f89-9804-6ede7f77e27c/ingress-operator/2.log"
Mar 18 10:01:36.538331 master-0 kubenswrapper[8244]: I0318 10:01:36.538298 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-kr5kz_accc57fb-75f5-4f89-9804-6ede7f77e27c/ingress-operator/1.log"
Mar 18 10:01:36.538850 master-0 kubenswrapper[8244]: I0318 10:01:36.538773 8244 generic.go:334] "Generic (PLEG): container finished" podID="accc57fb-75f5-4f89-9804-6ede7f77e27c" containerID="0d30b4f631b8eb9dde0a0925230da53e5145662b1505b3eb3b7912145bc9b9d7" exitCode=1
Mar 18 10:01:36.538911 master-0 kubenswrapper[8244]: I0318 10:01:36.538851 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" event={"ID":"accc57fb-75f5-4f89-9804-6ede7f77e27c","Type":"ContainerDied","Data":"0d30b4f631b8eb9dde0a0925230da53e5145662b1505b3eb3b7912145bc9b9d7"}
Mar 18 10:01:36.538911 master-0 kubenswrapper[8244]: I0318 10:01:36.538891 8244 scope.go:117] "RemoveContainer" containerID="8be1e41fb91899198366216500a2564664d7ef8ef90cbe9f4c1e19358a42df09"
Mar 18 10:01:36.539790 master-0 kubenswrapper[8244]: I0318 10:01:36.539749 8244 scope.go:117] "RemoveContainer" containerID="0d30b4f631b8eb9dde0a0925230da53e5145662b1505b3eb3b7912145bc9b9d7"
Mar 18 10:01:36.540563 master-0 kubenswrapper[8244]: E0318 10:01:36.540387 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-kr5kz_openshift-ingress-operator(accc57fb-75f5-4f89-9804-6ede7f77e27c)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" podUID="accc57fb-75f5-4f89-9804-6ede7f77e27c"
Mar 18 10:01:37.551061 master-0 kubenswrapper[8244]: I0318 10:01:37.551002 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-kr5kz_accc57fb-75f5-4f89-9804-6ede7f77e27c/ingress-operator/2.log"
[… identical Startup probe failure blocks for the router container continued once per second from 10:01:37 through 10:01:42 …]
Mar 18 10:01:43.402395 master-0 kubenswrapper[8244]: I0318 10:01:43.402327 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:01:43.402395 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:01:43.402395 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:01:43.402395 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:01:43.403230 master-0 kubenswrapper[8244]: I0318 10:01:43.402419 8244 prober.go:107] "Probe failed" probeType="Startup"
pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:01:44.402093 master-0 kubenswrapper[8244]: I0318 10:01:44.401998 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:01:44.402093 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:01:44.402093 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:01:44.402093 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:01:44.403118 master-0 kubenswrapper[8244]: I0318 10:01:44.402126 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:01:45.402539 master-0 kubenswrapper[8244]: I0318 10:01:45.402352 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:01:45.402539 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:01:45.402539 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:01:45.402539 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:01:45.402539 master-0 kubenswrapper[8244]: I0318 10:01:45.402512 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:01:46.400988 
master-0 kubenswrapper[8244]: I0318 10:01:46.400812 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:01:46.400988 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:01:46.400988 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:01:46.400988 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:01:46.400988 master-0 kubenswrapper[8244]: I0318 10:01:46.400986 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:01:47.401749 master-0 kubenswrapper[8244]: I0318 10:01:47.401654 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:01:47.401749 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:01:47.401749 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:01:47.401749 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:01:47.401749 master-0 kubenswrapper[8244]: I0318 10:01:47.401745 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:01:47.733478 master-0 kubenswrapper[8244]: I0318 10:01:47.733303 8244 scope.go:117] "RemoveContainer" containerID="0d30b4f631b8eb9dde0a0925230da53e5145662b1505b3eb3b7912145bc9b9d7" Mar 18 10:01:47.733998 master-0 
kubenswrapper[8244]: E0318 10:01:47.733698 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-kr5kz_openshift-ingress-operator(accc57fb-75f5-4f89-9804-6ede7f77e27c)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" podUID="accc57fb-75f5-4f89-9804-6ede7f77e27c" Mar 18 10:01:48.402052 master-0 kubenswrapper[8244]: I0318 10:01:48.401937 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:01:48.402052 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:01:48.402052 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:01:48.402052 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:01:48.402052 master-0 kubenswrapper[8244]: I0318 10:01:48.402043 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:01:49.401861 master-0 kubenswrapper[8244]: I0318 10:01:49.401724 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:01:49.401861 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:01:49.401861 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:01:49.401861 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:01:49.402959 master-0 kubenswrapper[8244]: I0318 10:01:49.401873 8244 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:01:50.401747 master-0 kubenswrapper[8244]: I0318 10:01:50.401651 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:01:50.401747 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:01:50.401747 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:01:50.401747 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:01:50.401747 master-0 kubenswrapper[8244]: I0318 10:01:50.401767 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:01:51.401588 master-0 kubenswrapper[8244]: I0318 10:01:51.401484 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:01:51.401588 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:01:51.401588 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:01:51.401588 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:01:51.401588 master-0 kubenswrapper[8244]: I0318 10:01:51.401561 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Mar 18 10:01:52.400610 master-0 kubenswrapper[8244]: I0318 10:01:52.400545 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:01:52.400610 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:01:52.400610 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:01:52.400610 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:01:52.400912 master-0 kubenswrapper[8244]: I0318 10:01:52.400620 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:01:53.401254 master-0 kubenswrapper[8244]: I0318 10:01:53.401140 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:01:53.401254 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:01:53.401254 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:01:53.401254 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:01:53.401254 master-0 kubenswrapper[8244]: I0318 10:01:53.401203 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:01:54.402750 master-0 kubenswrapper[8244]: I0318 10:01:54.402619 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:01:54.402750 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:01:54.402750 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:01:54.402750 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:01:54.404140 master-0 kubenswrapper[8244]: I0318 10:01:54.402758 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:01:55.401739 master-0 kubenswrapper[8244]: I0318 10:01:55.401670 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:01:55.401739 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:01:55.401739 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:01:55.401739 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:01:55.402266 master-0 kubenswrapper[8244]: I0318 10:01:55.401788 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:01:56.401377 master-0 kubenswrapper[8244]: I0318 10:01:56.401304 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:01:56.401377 master-0 kubenswrapper[8244]: 
[-]has-synced failed: reason withheld Mar 18 10:01:56.401377 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:01:56.401377 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:01:56.402559 master-0 kubenswrapper[8244]: I0318 10:01:56.401939 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:01:57.402297 master-0 kubenswrapper[8244]: I0318 10:01:57.402204 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:01:57.402297 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:01:57.402297 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:01:57.402297 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:01:57.402297 master-0 kubenswrapper[8244]: I0318 10:01:57.402284 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:01:58.401507 master-0 kubenswrapper[8244]: I0318 10:01:58.401448 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:01:58.401507 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:01:58.401507 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:01:58.401507 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:01:58.401793 master-0 
kubenswrapper[8244]: I0318 10:01:58.401520 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:01:59.400944 master-0 kubenswrapper[8244]: I0318 10:01:59.400843 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:01:59.400944 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:01:59.400944 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:01:59.400944 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:01:59.400944 master-0 kubenswrapper[8244]: I0318 10:01:59.400926 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:01:59.732959 master-0 kubenswrapper[8244]: I0318 10:01:59.732785 8244 scope.go:117] "RemoveContainer" containerID="0d30b4f631b8eb9dde0a0925230da53e5145662b1505b3eb3b7912145bc9b9d7" Mar 18 10:02:00.402052 master-0 kubenswrapper[8244]: I0318 10:02:00.401948 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:02:00.402052 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:02:00.402052 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:02:00.402052 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:02:00.403107 master-0 kubenswrapper[8244]: I0318 
10:02:00.402041 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:02:00.727395 master-0 kubenswrapper[8244]: I0318 10:02:00.727283 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-kr5kz_accc57fb-75f5-4f89-9804-6ede7f77e27c/ingress-operator/2.log" Mar 18 10:02:00.727889 master-0 kubenswrapper[8244]: I0318 10:02:00.727809 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" event={"ID":"accc57fb-75f5-4f89-9804-6ede7f77e27c","Type":"ContainerStarted","Data":"19028a9b74d8fde675db8214cb7dc59516cd57bb8937a1e369ea219dd5ad277c"} Mar 18 10:02:01.401392 master-0 kubenswrapper[8244]: I0318 10:02:01.401286 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:02:01.401392 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:02:01.401392 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:02:01.401392 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:02:01.401392 master-0 kubenswrapper[8244]: I0318 10:02:01.401360 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:02:02.401639 master-0 kubenswrapper[8244]: I0318 10:02:02.401553 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP 
probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:02:02.401639 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:02:02.401639 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:02:02.401639 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:02:02.401639 master-0 kubenswrapper[8244]: I0318 10:02:02.401632 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:02:03.401283 master-0 kubenswrapper[8244]: I0318 10:02:03.401238 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:02:03.401283 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:02:03.401283 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:02:03.401283 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:02:03.401622 master-0 kubenswrapper[8244]: I0318 10:02:03.401305 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:02:04.402505 master-0 kubenswrapper[8244]: I0318 10:02:04.402425 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:02:04.402505 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:02:04.402505 master-0 
kubenswrapper[8244]: [+]process-running ok Mar 18 10:02:04.402505 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:02:04.403531 master-0 kubenswrapper[8244]: I0318 10:02:04.402527 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:02:05.402404 master-0 kubenswrapper[8244]: I0318 10:02:05.402329 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:02:05.402404 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:02:05.402404 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:02:05.402404 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:02:05.403768 master-0 kubenswrapper[8244]: I0318 10:02:05.402423 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:02:06.400944 master-0 kubenswrapper[8244]: I0318 10:02:06.400876 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:02:06.400944 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:02:06.400944 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:02:06.400944 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:02:06.401480 master-0 kubenswrapper[8244]: I0318 10:02:06.400947 8244 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:02:07.402068 master-0 kubenswrapper[8244]: I0318 10:02:07.402003 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:02:07.402068 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:02:07.402068 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:02:07.402068 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:02:07.402918 master-0 kubenswrapper[8244]: I0318 10:02:07.402081 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:02:08.402003 master-0 kubenswrapper[8244]: I0318 10:02:08.401778 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:02:08.402003 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:02:08.402003 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:02:08.402003 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:02:08.403062 master-0 kubenswrapper[8244]: I0318 10:02:08.402049 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 18 10:02:09.401973 master-0 kubenswrapper[8244]: I0318 10:02:09.401906 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:02:09.401973 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:02:09.401973 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:02:09.401973 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:02:09.403012 master-0 kubenswrapper[8244]: I0318 10:02:09.401981 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:02:10.401466 master-0 kubenswrapper[8244]: I0318 10:02:10.401367 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:02:10.401466 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:02:10.401466 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:02:10.401466 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:02:10.402009 master-0 kubenswrapper[8244]: I0318 10:02:10.401485 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:02:11.402131 master-0 kubenswrapper[8244]: I0318 10:02:11.402021 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:02:11.402131 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:02:11.402131 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:02:11.402131 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:02:11.402131 master-0 kubenswrapper[8244]: I0318 10:02:11.402130 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:02:12.402674 master-0 kubenswrapper[8244]: I0318 10:02:12.402574 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:02:12.402674 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:02:12.402674 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:02:12.402674 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:02:12.403728 master-0 kubenswrapper[8244]: I0318 10:02:12.402680 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
[... identical Startup probe failure cycles (patch_prober.go:28 healthz detail plus prober.go:107 "Probe failed") repeated once per second from Mar 18 10:02:13 through Mar 18 10:02:43 ...]
Mar 18 10:02:44.006849 master-0 kubenswrapper[8244]: I0318 10:02:44.006759 8244 scope.go:117] "RemoveContainer" containerID="614bad60cc203e379c2219ece0e463fc923ffaef207f86d7d7dbe59e9131f846"
[... identical Startup probe failure cycles repeated once per second from Mar 18 10:02:44 through Mar 18 10:02:54 ...]
Mar 18 10:02:55.405539 master-0 kubenswrapper[8244]: I0318 10:02:55.405484 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:02:55.405539 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:02:55.405539 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:02:55.405539 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:02:55.406203 master-0 kubenswrapper[8244]: I0318 10:02:55.405540 8244 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:02:56.179265 master-0 kubenswrapper[8244]: I0318 10:02:56.179200 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 18 10:02:56.180107 master-0 kubenswrapper[8244]: I0318 10:02:56.180084 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 10:02:56.182061 master-0 kubenswrapper[8244]: I0318 10:02:56.182005 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-76rsr" Mar 18 10:02:56.182550 master-0 kubenswrapper[8244]: I0318 10:02:56.182472 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 18 10:02:56.195817 master-0 kubenswrapper[8244]: I0318 10:02:56.195738 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 18 10:02:56.322333 master-0 kubenswrapper[8244]: I0318 10:02:56.322271 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/346d6f79-a9bd-4097-abe7-b68830aa2e84-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"346d6f79-a9bd-4097-abe7-b68830aa2e84\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 10:02:56.322558 master-0 kubenswrapper[8244]: I0318 10:02:56.322371 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/346d6f79-a9bd-4097-abe7-b68830aa2e84-kube-api-access\") pod \"installer-3-master-0\" (UID: 
\"346d6f79-a9bd-4097-abe7-b68830aa2e84\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 10:02:56.322558 master-0 kubenswrapper[8244]: I0318 10:02:56.322411 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/346d6f79-a9bd-4097-abe7-b68830aa2e84-var-lock\") pod \"installer-3-master-0\" (UID: \"346d6f79-a9bd-4097-abe7-b68830aa2e84\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 10:02:56.402495 master-0 kubenswrapper[8244]: I0318 10:02:56.402382 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:02:56.402495 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:02:56.402495 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:02:56.402495 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:02:56.402882 master-0 kubenswrapper[8244]: I0318 10:02:56.402524 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:02:56.423851 master-0 kubenswrapper[8244]: I0318 10:02:56.423753 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/346d6f79-a9bd-4097-abe7-b68830aa2e84-var-lock\") pod \"installer-3-master-0\" (UID: \"346d6f79-a9bd-4097-abe7-b68830aa2e84\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 10:02:56.424451 master-0 kubenswrapper[8244]: I0318 10:02:56.423957 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/346d6f79-a9bd-4097-abe7-b68830aa2e84-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"346d6f79-a9bd-4097-abe7-b68830aa2e84\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 10:02:56.424451 master-0 kubenswrapper[8244]: I0318 10:02:56.423974 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/346d6f79-a9bd-4097-abe7-b68830aa2e84-var-lock\") pod \"installer-3-master-0\" (UID: \"346d6f79-a9bd-4097-abe7-b68830aa2e84\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 10:02:56.424451 master-0 kubenswrapper[8244]: I0318 10:02:56.424104 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/346d6f79-a9bd-4097-abe7-b68830aa2e84-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"346d6f79-a9bd-4097-abe7-b68830aa2e84\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 10:02:56.424451 master-0 kubenswrapper[8244]: I0318 10:02:56.424233 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/346d6f79-a9bd-4097-abe7-b68830aa2e84-kube-api-access\") pod \"installer-3-master-0\" (UID: \"346d6f79-a9bd-4097-abe7-b68830aa2e84\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 10:02:56.444052 master-0 kubenswrapper[8244]: I0318 10:02:56.443914 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/346d6f79-a9bd-4097-abe7-b68830aa2e84-kube-api-access\") pod \"installer-3-master-0\" (UID: \"346d6f79-a9bd-4097-abe7-b68830aa2e84\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 10:02:56.526003 master-0 kubenswrapper[8244]: I0318 10:02:56.525924 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 10:02:57.059881 master-0 kubenswrapper[8244]: I0318 10:02:57.059538 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 18 10:02:57.064130 master-0 kubenswrapper[8244]: W0318 10:02:57.064065 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod346d6f79_a9bd_4097_abe7_b68830aa2e84.slice/crio-44fff1e61adbaef01d35b3cb7a668fee655369026524529c8495c49a8dde5128 WatchSource:0}: Error finding container 44fff1e61adbaef01d35b3cb7a668fee655369026524529c8495c49a8dde5128: Status 404 returned error can't find the container with id 44fff1e61adbaef01d35b3cb7a668fee655369026524529c8495c49a8dde5128 Mar 18 10:02:57.151638 master-0 kubenswrapper[8244]: I0318 10:02:57.151543 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"346d6f79-a9bd-4097-abe7-b68830aa2e84","Type":"ContainerStarted","Data":"44fff1e61adbaef01d35b3cb7a668fee655369026524529c8495c49a8dde5128"} Mar 18 10:02:57.401404 master-0 kubenswrapper[8244]: I0318 10:02:57.401279 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:02:57.401404 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:02:57.401404 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:02:57.401404 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:02:57.401404 master-0 kubenswrapper[8244]: I0318 10:02:57.401354 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Mar 18 10:02:58.171993 master-0 kubenswrapper[8244]: I0318 10:02:58.171917 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"346d6f79-a9bd-4097-abe7-b68830aa2e84","Type":"ContainerStarted","Data":"974b6ae008035f16bd3f106b986b5975e658b69a9a1e106bd2d280e49e6fba6d"} Mar 18 10:02:58.234566 master-0 kubenswrapper[8244]: I0318 10:02:58.234458 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-master-0" podStartSLOduration=2.234439048 podStartE2EDuration="2.234439048s" podCreationTimestamp="2026-03-18 10:02:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:02:58.23334226 +0000 UTC m=+494.713078388" watchObservedRunningTime="2026-03-18 10:02:58.234439048 +0000 UTC m=+494.714175186" Mar 18 10:02:58.403102 master-0 kubenswrapper[8244]: I0318 10:02:58.401842 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:02:58.403102 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:02:58.403102 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:02:58.403102 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:02:58.403102 master-0 kubenswrapper[8244]: I0318 10:02:58.401913 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:02:59.401155 master-0 kubenswrapper[8244]: I0318 10:02:59.401088 8244 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:02:59.401155 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:02:59.401155 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:02:59.401155 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:02:59.402028 master-0 kubenswrapper[8244]: I0318 10:02:59.401219 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:03:00.200730 master-0 kubenswrapper[8244]: I0318 10:03:00.200622 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Mar 18 10:03:00.201583 master-0 kubenswrapper[8244]: I0318 10:03:00.201505 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 18 10:03:00.204050 master-0 kubenswrapper[8244]: I0318 10:03:00.203967 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 18 10:03:00.205014 master-0 kubenswrapper[8244]: I0318 10:03:00.204948 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-zr9bx" Mar 18 10:03:00.222108 master-0 kubenswrapper[8244]: I0318 10:03:00.222014 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Mar 18 10:03:00.282865 master-0 kubenswrapper[8244]: I0318 10:03:00.282790 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c62ceda-5e7e-4392-83b9-0d80856e1a96-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"1c62ceda-5e7e-4392-83b9-0d80856e1a96\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 18 10:03:00.282865 master-0 kubenswrapper[8244]: I0318 10:03:00.282874 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1c62ceda-5e7e-4392-83b9-0d80856e1a96-var-lock\") pod \"installer-5-master-0\" (UID: \"1c62ceda-5e7e-4392-83b9-0d80856e1a96\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 18 10:03:00.283172 master-0 kubenswrapper[8244]: I0318 10:03:00.282910 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c62ceda-5e7e-4392-83b9-0d80856e1a96-kube-api-access\") pod \"installer-5-master-0\" (UID: \"1c62ceda-5e7e-4392-83b9-0d80856e1a96\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 18 10:03:00.383975 master-0 kubenswrapper[8244]: I0318 10:03:00.383924 8244 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c62ceda-5e7e-4392-83b9-0d80856e1a96-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"1c62ceda-5e7e-4392-83b9-0d80856e1a96\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 18 10:03:00.384188 master-0 kubenswrapper[8244]: I0318 10:03:00.383991 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1c62ceda-5e7e-4392-83b9-0d80856e1a96-var-lock\") pod \"installer-5-master-0\" (UID: \"1c62ceda-5e7e-4392-83b9-0d80856e1a96\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 18 10:03:00.384188 master-0 kubenswrapper[8244]: I0318 10:03:00.384104 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c62ceda-5e7e-4392-83b9-0d80856e1a96-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"1c62ceda-5e7e-4392-83b9-0d80856e1a96\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 18 10:03:00.384256 master-0 kubenswrapper[8244]: I0318 10:03:00.384222 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1c62ceda-5e7e-4392-83b9-0d80856e1a96-var-lock\") pod \"installer-5-master-0\" (UID: \"1c62ceda-5e7e-4392-83b9-0d80856e1a96\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 18 10:03:00.384256 master-0 kubenswrapper[8244]: I0318 10:03:00.384196 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c62ceda-5e7e-4392-83b9-0d80856e1a96-kube-api-access\") pod \"installer-5-master-0\" (UID: \"1c62ceda-5e7e-4392-83b9-0d80856e1a96\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 18 10:03:00.402410 master-0 kubenswrapper[8244]: I0318 10:03:00.402329 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:03:00.402410 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:03:00.402410 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:03:00.402410 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:03:00.403084 master-0 kubenswrapper[8244]: I0318 10:03:00.402412 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:03:00.410440 master-0 kubenswrapper[8244]: I0318 10:03:00.410397 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c62ceda-5e7e-4392-83b9-0d80856e1a96-kube-api-access\") pod \"installer-5-master-0\" (UID: \"1c62ceda-5e7e-4392-83b9-0d80856e1a96\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 18 10:03:00.531084 master-0 kubenswrapper[8244]: I0318 10:03:00.530905 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 18 10:03:00.981186 master-0 kubenswrapper[8244]: I0318 10:03:00.981089 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Mar 18 10:03:00.992404 master-0 kubenswrapper[8244]: W0318 10:03:00.989909 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1c62ceda_5e7e_4392_83b9_0d80856e1a96.slice/crio-9b455e2d76fdd49301fe2af949c3adea4b9e18edfc2b50e8b9cd691e2613e68a WatchSource:0}: Error finding container 9b455e2d76fdd49301fe2af949c3adea4b9e18edfc2b50e8b9cd691e2613e68a: Status 404 returned error can't find the container with id 9b455e2d76fdd49301fe2af949c3adea4b9e18edfc2b50e8b9cd691e2613e68a Mar 18 10:03:01.196081 master-0 kubenswrapper[8244]: I0318 10:03:01.196037 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"1c62ceda-5e7e-4392-83b9-0d80856e1a96","Type":"ContainerStarted","Data":"9b455e2d76fdd49301fe2af949c3adea4b9e18edfc2b50e8b9cd691e2613e68a"} Mar 18 10:03:01.402354 master-0 kubenswrapper[8244]: I0318 10:03:01.402224 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:03:01.402354 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:03:01.402354 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:03:01.402354 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:03:01.402354 master-0 kubenswrapper[8244]: I0318 10:03:01.402331 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" 
Mar 18 10:03:02.207046 master-0 kubenswrapper[8244]: I0318 10:03:02.206953 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"1c62ceda-5e7e-4392-83b9-0d80856e1a96","Type":"ContainerStarted","Data":"64fd17a4dc869dbbdd2a4f39ac14053290f921c096dddb0c79f7bc300e3e1965"} Mar 18 10:03:02.233493 master-0 kubenswrapper[8244]: I0318 10:03:02.233366 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-5-master-0" podStartSLOduration=2.233349382 podStartE2EDuration="2.233349382s" podCreationTimestamp="2026-03-18 10:03:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:03:02.228636683 +0000 UTC m=+498.708372811" watchObservedRunningTime="2026-03-18 10:03:02.233349382 +0000 UTC m=+498.713085520" Mar 18 10:03:02.401676 master-0 kubenswrapper[8244]: I0318 10:03:02.401564 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:03:02.401676 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:03:02.401676 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:03:02.401676 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:03:02.402226 master-0 kubenswrapper[8244]: I0318 10:03:02.401680 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:03:03.401073 master-0 kubenswrapper[8244]: I0318 10:03:03.401010 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:03:03.401073 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:03:03.401073 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:03:03.401073 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:03:03.401665 master-0 kubenswrapper[8244]: I0318 10:03:03.401089 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:03:04.402129 master-0 kubenswrapper[8244]: I0318 10:03:04.402032 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:03:04.402129 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:03:04.402129 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:03:04.402129 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:03:04.402129 master-0 kubenswrapper[8244]: I0318 10:03:04.402119 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:03:05.401277 master-0 kubenswrapper[8244]: I0318 10:03:05.401214 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:03:05.401277 master-0 kubenswrapper[8244]: 
[-]has-synced failed: reason withheld Mar 18 10:03:05.401277 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:03:05.401277 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:03:05.401737 master-0 kubenswrapper[8244]: I0318 10:03:05.401298 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:03:06.402228 master-0 kubenswrapper[8244]: I0318 10:03:06.402140 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:03:06.402228 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:03:06.402228 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:03:06.402228 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:03:06.403159 master-0 kubenswrapper[8244]: I0318 10:03:06.402244 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:03:07.401867 master-0 kubenswrapper[8244]: I0318 10:03:07.401725 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:03:07.401867 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:03:07.401867 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:03:07.401867 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:03:07.401867 master-0 
kubenswrapper[8244]: I0318 10:03:07.401849 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:03:08.401201 master-0 kubenswrapper[8244]: I0318 10:03:08.401127 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:03:08.401201 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:03:08.401201 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:03:08.401201 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:03:08.401201 master-0 kubenswrapper[8244]: I0318 10:03:08.401193 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:03:09.401941 master-0 kubenswrapper[8244]: I0318 10:03:09.401857 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:03:09.401941 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:03:09.401941 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:03:09.401941 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:03:09.402934 master-0 kubenswrapper[8244]: I0318 10:03:09.401968 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:03:10.402254 master-0 kubenswrapper[8244]: I0318 10:03:10.402181 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:03:10.402254 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:03:10.402254 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:03:10.402254 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:03:10.403246 master-0 kubenswrapper[8244]: I0318 10:03:10.402280 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:03:11.402194 master-0 kubenswrapper[8244]: I0318 10:03:11.402118 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:03:11.402194 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:03:11.402194 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:03:11.402194 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:03:11.402194 master-0 kubenswrapper[8244]: I0318 10:03:11.402211 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:03:12.402214 master-0 kubenswrapper[8244]: I0318 10:03:12.402135 8244 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:03:12.402214 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:03:12.402214 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:03:12.402214 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:03:12.403631 master-0 kubenswrapper[8244]: I0318 10:03:12.402232 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:03:12.403631 master-0 kubenswrapper[8244]: I0318 10:03:12.402307 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-82tbk"
Mar 18 10:03:12.403631 master-0 kubenswrapper[8244]: I0318 10:03:12.403220 8244 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"0056d6e24bcc6dc57e3453a9e7f141adeb078909a14a7b6029f52e100df60161"} pod="openshift-ingress/router-default-7dcf5569b5-82tbk" containerMessage="Container router failed startup probe, will be restarted"
Mar 18 10:03:12.403631 master-0 kubenswrapper[8244]: I0318 10:03:12.403281 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" containerID="cri-o://0056d6e24bcc6dc57e3453a9e7f141adeb078909a14a7b6029f52e100df60161" gracePeriod=3600
Mar 18 10:03:14.193965 master-0 kubenswrapper[8244]: I0318 10:03:14.193913 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-s8k7j"]
Mar 18 10:03:14.195311 master-0 kubenswrapper[8244]: I0318 10:03:14.195291 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-s8k7j"
Mar 18 10:03:14.197368 master-0 kubenswrapper[8244]: I0318 10:03:14.197309 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist"
Mar 18 10:03:14.197709 master-0 kubenswrapper[8244]: I0318 10:03:14.197678 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-rchwt"
Mar 18 10:03:14.208752 master-0 kubenswrapper[8244]: I0318 10:03:14.208703 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5e971a41-f0bc-4847-9391-6c03dd4185a6-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-s8k7j\" (UID: \"5e971a41-f0bc-4847-9391-6c03dd4185a6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-s8k7j"
Mar 18 10:03:14.208982 master-0 kubenswrapper[8244]: I0318 10:03:14.208855 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6lsx\" (UniqueName: \"kubernetes.io/projected/5e971a41-f0bc-4847-9391-6c03dd4185a6-kube-api-access-w6lsx\") pod \"cni-sysctl-allowlist-ds-s8k7j\" (UID: \"5e971a41-f0bc-4847-9391-6c03dd4185a6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-s8k7j"
Mar 18 10:03:14.208982 master-0 kubenswrapper[8244]: I0318 10:03:14.208971 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/5e971a41-f0bc-4847-9391-6c03dd4185a6-ready\") pod \"cni-sysctl-allowlist-ds-s8k7j\" (UID: \"5e971a41-f0bc-4847-9391-6c03dd4185a6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-s8k7j"
Mar 18 10:03:14.209090 master-0 kubenswrapper[8244]: I0318 10:03:14.209029 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5e971a41-f0bc-4847-9391-6c03dd4185a6-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-s8k7j\" (UID: \"5e971a41-f0bc-4847-9391-6c03dd4185a6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-s8k7j"
Mar 18 10:03:14.310299 master-0 kubenswrapper[8244]: I0318 10:03:14.310234 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5e971a41-f0bc-4847-9391-6c03dd4185a6-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-s8k7j\" (UID: \"5e971a41-f0bc-4847-9391-6c03dd4185a6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-s8k7j"
Mar 18 10:03:14.310540 master-0 kubenswrapper[8244]: I0318 10:03:14.310320 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6lsx\" (UniqueName: \"kubernetes.io/projected/5e971a41-f0bc-4847-9391-6c03dd4185a6-kube-api-access-w6lsx\") pod \"cni-sysctl-allowlist-ds-s8k7j\" (UID: \"5e971a41-f0bc-4847-9391-6c03dd4185a6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-s8k7j"
Mar 18 10:03:14.310540 master-0 kubenswrapper[8244]: I0318 10:03:14.310378 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/5e971a41-f0bc-4847-9391-6c03dd4185a6-ready\") pod \"cni-sysctl-allowlist-ds-s8k7j\" (UID: \"5e971a41-f0bc-4847-9391-6c03dd4185a6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-s8k7j"
Mar 18 10:03:14.310540 master-0 kubenswrapper[8244]: I0318 10:03:14.310417 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5e971a41-f0bc-4847-9391-6c03dd4185a6-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-s8k7j\" (UID: \"5e971a41-f0bc-4847-9391-6c03dd4185a6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-s8k7j"
Mar 18 10:03:14.310540 master-0 kubenswrapper[8244]: I0318 10:03:14.310471 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5e971a41-f0bc-4847-9391-6c03dd4185a6-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-s8k7j\" (UID: \"5e971a41-f0bc-4847-9391-6c03dd4185a6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-s8k7j"
Mar 18 10:03:14.311417 master-0 kubenswrapper[8244]: I0318 10:03:14.311377 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/5e971a41-f0bc-4847-9391-6c03dd4185a6-ready\") pod \"cni-sysctl-allowlist-ds-s8k7j\" (UID: \"5e971a41-f0bc-4847-9391-6c03dd4185a6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-s8k7j"
Mar 18 10:03:14.311506 master-0 kubenswrapper[8244]: I0318 10:03:14.311406 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5e971a41-f0bc-4847-9391-6c03dd4185a6-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-s8k7j\" (UID: \"5e971a41-f0bc-4847-9391-6c03dd4185a6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-s8k7j"
Mar 18 10:03:14.331741 master-0 kubenswrapper[8244]: I0318 10:03:14.331697 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6lsx\" (UniqueName: \"kubernetes.io/projected/5e971a41-f0bc-4847-9391-6c03dd4185a6-kube-api-access-w6lsx\") pod \"cni-sysctl-allowlist-ds-s8k7j\" (UID: \"5e971a41-f0bc-4847-9391-6c03dd4185a6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-s8k7j"
Mar 18 10:03:14.512074 master-0 kubenswrapper[8244]: I0318 10:03:14.511947 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-s8k7j"
Mar 18 10:03:15.311059 master-0 kubenswrapper[8244]: I0318 10:03:15.311000 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-s8k7j" event={"ID":"5e971a41-f0bc-4847-9391-6c03dd4185a6","Type":"ContainerStarted","Data":"99b6493be49322ae8ac33a1822af7b2ad1b8cf10cb82aaa72e69bd3bdfa33a16"}
Mar 18 10:03:15.311059 master-0 kubenswrapper[8244]: I0318 10:03:15.311052 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-s8k7j" event={"ID":"5e971a41-f0bc-4847-9391-6c03dd4185a6","Type":"ContainerStarted","Data":"05391b559584b61eed691de160fd743945d67b3f396cbfb6ffe9983f7f3835e8"}
Mar 18 10:03:15.311683 master-0 kubenswrapper[8244]: I0318 10:03:15.311412 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-s8k7j"
Mar 18 10:03:15.331636 master-0 kubenswrapper[8244]: I0318 10:03:15.331556 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-s8k7j" podStartSLOduration=1.331533933 podStartE2EDuration="1.331533933s" podCreationTimestamp="2026-03-18 10:03:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:03:15.329892311 +0000 UTC m=+511.809628449" watchObservedRunningTime="2026-03-18 10:03:15.331533933 +0000 UTC m=+511.811270071"
Mar 18 10:03:15.332102 master-0 kubenswrapper[8244]: I0318 10:03:15.332054 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-s8k7j"
Mar 18 10:03:16.125655 master-0 kubenswrapper[8244]: I0318 10:03:16.125585 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-s8k7j"]
Mar 18 10:03:17.328699 master-0 kubenswrapper[8244]: I0318 10:03:17.328606 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-s8k7j" podUID="5e971a41-f0bc-4847-9391-6c03dd4185a6" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://99b6493be49322ae8ac33a1822af7b2ad1b8cf10cb82aaa72e69bd3bdfa33a16" gracePeriod=30
Mar 18 10:03:19.873735 master-0 kubenswrapper[8244]: I0318 10:03:19.873678 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"]
Mar 18 10:03:19.875156 master-0 kubenswrapper[8244]: I0318 10:03:19.875121 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"
Mar 18 10:03:19.878107 master-0 kubenswrapper[8244]: I0318 10:03:19.878074 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config"
Mar 18 10:03:19.878452 master-0 kubenswrapper[8244]: I0318 10:03:19.878429 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-gxfn6"
Mar 18 10:03:19.878679 master-0 kubenswrapper[8244]: I0318 10:03:19.878658 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs"
Mar 18 10:03:19.878929 master-0 kubenswrapper[8244]: I0318 10:03:19.878906 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client"
Mar 18 10:03:19.879127 master-0 kubenswrapper[8244]: I0318 10:03:19.879106 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls"
Mar 18 10:03:19.879176 master-0 kubenswrapper[8244]: I0318 10:03:19.879160 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle"
Mar 18 10:03:19.891934 master-0 kubenswrapper[8244]: I0318 10:03:19.891081 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"]
Mar 18 10:03:19.899655 master-0 kubenswrapper[8244]: I0318 10:03:19.899599 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38"
Mar 18 10:03:19.900415 master-0 kubenswrapper[8244]: I0318 10:03:19.900065 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-serving-certs-ca-bundle\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"
Mar 18 10:03:20.001188 master-0 kubenswrapper[8244]: I0318 10:03:20.001127 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-serving-certs-ca-bundle\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"
Mar 18 10:03:20.001482 master-0 kubenswrapper[8244]: I0318 10:03:20.001388 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-federate-client-tls\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"
Mar 18 10:03:20.001590 master-0 kubenswrapper[8244]: I0318 10:03:20.001536 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"
Mar 18 10:03:20.001664 master-0 kubenswrapper[8244]: I0318 10:03:20.001638 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-metrics-client-ca\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"
Mar 18 10:03:20.001743 master-0 kubenswrapper[8244]: I0318 10:03:20.001686 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-telemeter-trusted-ca-bundle\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"
Mar 18 10:03:20.001816 master-0 kubenswrapper[8244]: I0318 10:03:20.001758 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-secret-telemeter-client\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"
Mar 18 10:03:20.001816 master-0 kubenswrapper[8244]: I0318 10:03:20.001845 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxf74\" (UniqueName: \"kubernetes.io/projected/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-kube-api-access-sxf74\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"
Mar 18 10:03:20.002011 master-0 kubenswrapper[8244]: I0318 10:03:20.001925 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-telemeter-client-tls\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"
Mar 18 10:03:20.002388 master-0 kubenswrapper[8244]: I0318 10:03:20.002341 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-serving-certs-ca-bundle\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"
Mar 18 10:03:20.103046 master-0 kubenswrapper[8244]: I0318 10:03:20.102964 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-metrics-client-ca\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"
Mar 18 10:03:20.103046 master-0 kubenswrapper[8244]: I0318 10:03:20.103036 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-telemeter-trusted-ca-bundle\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"
Mar 18 10:03:20.103382 master-0 kubenswrapper[8244]: I0318 10:03:20.103078 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-secret-telemeter-client\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"
Mar 18 10:03:20.103382 master-0 kubenswrapper[8244]: I0318 10:03:20.103111 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxf74\" (UniqueName: \"kubernetes.io/projected/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-kube-api-access-sxf74\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"
Mar 18 10:03:20.103382 master-0 kubenswrapper[8244]: I0318 10:03:20.103144 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-telemeter-client-tls\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"
Mar 18 10:03:20.103596 master-0 kubenswrapper[8244]: I0318 10:03:20.103543 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-federate-client-tls\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"
Mar 18 10:03:20.103683 master-0 kubenswrapper[8244]: I0318 10:03:20.103621 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"
Mar 18 10:03:20.104127 master-0 kubenswrapper[8244]: I0318 10:03:20.104077 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-metrics-client-ca\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"
Mar 18 10:03:20.111306 master-0 kubenswrapper[8244]: I0318 10:03:20.106407 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-telemeter-trusted-ca-bundle\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"
Mar 18 10:03:20.111306 master-0 kubenswrapper[8244]: I0318 10:03:20.106869 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"
Mar 18 10:03:20.111306 master-0 kubenswrapper[8244]: I0318 10:03:20.106880 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-secret-telemeter-client\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"
Mar 18 10:03:20.111306 master-0 kubenswrapper[8244]: I0318 10:03:20.108752 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-telemeter-client-tls\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"
Mar 18 10:03:20.122948 master-0 kubenswrapper[8244]: I0318 10:03:20.113182 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-federate-client-tls\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"
Mar 18 10:03:20.131089 master-0 kubenswrapper[8244]: I0318 10:03:20.130967 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxf74\" (UniqueName: \"kubernetes.io/projected/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-kube-api-access-sxf74\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"
Mar 18 10:03:20.188982 master-0 kubenswrapper[8244]: I0318 10:03:20.188901 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"
Mar 18 10:03:20.699600 master-0 kubenswrapper[8244]: I0318 10:03:20.699521 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"]
Mar 18 10:03:20.703611 master-0 kubenswrapper[8244]: W0318 10:03:20.703145 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa4cba67_b5d4_46c2_8cad_1a1379f764cb.slice/crio-a8685da7c022ead7819bc14f1d28e93a2c0d8bd27bb5dc325c78a31a740e3f59 WatchSource:0}: Error finding container a8685da7c022ead7819bc14f1d28e93a2c0d8bd27bb5dc325c78a31a740e3f59: Status 404 returned error can't find the container with id a8685da7c022ead7819bc14f1d28e93a2c0d8bd27bb5dc325c78a31a740e3f59
Mar 18 10:03:20.708690 master-0 kubenswrapper[8244]: I0318 10:03:20.708634 8244 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 18 10:03:21.358745 master-0 kubenswrapper[8244]: I0318 10:03:21.358596 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm" event={"ID":"aa4cba67-b5d4-46c2-8cad-1a1379f764cb","Type":"ContainerStarted","Data":"a8685da7c022ead7819bc14f1d28e93a2c0d8bd27bb5dc325c78a31a740e3f59"}
Mar 18 10:03:23.376133 master-0 kubenswrapper[8244]: I0318 10:03:23.376026 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm" event={"ID":"aa4cba67-b5d4-46c2-8cad-1a1379f764cb","Type":"ContainerStarted","Data":"e37487d88c3e2eb5a9d59f5f510a4fcd878891c201e54bf35b09af06a218db96"}
Mar 18 10:03:24.329967 master-0 kubenswrapper[8244]: I0318 10:03:24.329871 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-58c9f8fc64-ssnvh"]
Mar 18 10:03:24.332563 master-0 kubenswrapper[8244]: I0318 10:03:24.332502 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-58c9f8fc64-ssnvh"
Mar 18 10:03:24.337858 master-0 kubenswrapper[8244]: I0318 10:03:24.337777 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-mkddq"
Mar 18 10:03:24.358493 master-0 kubenswrapper[8244]: I0318 10:03:24.358442 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-58c9f8fc64-ssnvh"]
Mar 18 10:03:24.371881 master-0 kubenswrapper[8244]: I0318 10:03:24.371606 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f875878f-3588-42f1-9488-750d9f4582f8-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-ssnvh\" (UID: \"f875878f-3588-42f1-9488-750d9f4582f8\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-ssnvh"
Mar 18 10:03:24.373971 master-0 kubenswrapper[8244]: I0318 10:03:24.373939 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn7zt\" (UniqueName: \"kubernetes.io/projected/f875878f-3588-42f1-9488-750d9f4582f8-kube-api-access-nn7zt\") pod \"multus-admission-controller-58c9f8fc64-ssnvh\" (UID: \"f875878f-3588-42f1-9488-750d9f4582f8\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-ssnvh"
Mar 18 10:03:24.475313 master-0 kubenswrapper[8244]: I0318 10:03:24.475222 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f875878f-3588-42f1-9488-750d9f4582f8-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-ssnvh\" (UID: \"f875878f-3588-42f1-9488-750d9f4582f8\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-ssnvh"
Mar 18 10:03:24.476018 master-0 kubenswrapper[8244]: I0318 10:03:24.475554 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn7zt\" (UniqueName: \"kubernetes.io/projected/f875878f-3588-42f1-9488-750d9f4582f8-kube-api-access-nn7zt\") pod \"multus-admission-controller-58c9f8fc64-ssnvh\" (UID: \"f875878f-3588-42f1-9488-750d9f4582f8\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-ssnvh"
Mar 18 10:03:24.478925 master-0 kubenswrapper[8244]: I0318 10:03:24.478852 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f875878f-3588-42f1-9488-750d9f4582f8-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-ssnvh\" (UID: \"f875878f-3588-42f1-9488-750d9f4582f8\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-ssnvh"
Mar 18 10:03:24.495797 master-0 kubenswrapper[8244]: I0318 10:03:24.495724 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn7zt\" (UniqueName: \"kubernetes.io/projected/f875878f-3588-42f1-9488-750d9f4582f8-kube-api-access-nn7zt\") pod \"multus-admission-controller-58c9f8fc64-ssnvh\" (UID: \"f875878f-3588-42f1-9488-750d9f4582f8\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-ssnvh"
Mar 18 10:03:24.514913 master-0 kubenswrapper[8244]: E0318 10:03:24.514791 8244 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="99b6493be49322ae8ac33a1822af7b2ad1b8cf10cb82aaa72e69bd3bdfa33a16" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 18 10:03:24.517107 master-0 kubenswrapper[8244]: E0318 10:03:24.516994 8244 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="99b6493be49322ae8ac33a1822af7b2ad1b8cf10cb82aaa72e69bd3bdfa33a16" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 18 10:03:24.518718 master-0 kubenswrapper[8244]: E0318 10:03:24.518645 8244 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="99b6493be49322ae8ac33a1822af7b2ad1b8cf10cb82aaa72e69bd3bdfa33a16" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 18 10:03:24.518874 master-0 kubenswrapper[8244]: E0318 10:03:24.518738 8244 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-s8k7j" podUID="5e971a41-f0bc-4847-9391-6c03dd4185a6" containerName="kube-multus-additional-cni-plugins"
Mar 18 10:03:24.671094 master-0 kubenswrapper[8244]: I0318 10:03:24.669117 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-58c9f8fc64-ssnvh"
Mar 18 10:03:25.224342 master-0 kubenswrapper[8244]: I0318 10:03:25.224281 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-58c9f8fc64-ssnvh"]
Mar 18 10:03:25.229848 master-0 kubenswrapper[8244]: W0318 10:03:25.225776 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf875878f_3588_42f1_9488_750d9f4582f8.slice/crio-b7ca349d109c7ce47be51e023fb21ab1709798444b4c309eab6316772a1ee596 WatchSource:0}: Error finding container b7ca349d109c7ce47be51e023fb21ab1709798444b4c309eab6316772a1ee596: Status 404 returned error can't find the container with id b7ca349d109c7ce47be51e023fb21ab1709798444b4c309eab6316772a1ee596
Mar 18 10:03:25.396748 master-0 kubenswrapper[8244]: I0318 10:03:25.396689 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm" event={"ID":"aa4cba67-b5d4-46c2-8cad-1a1379f764cb","Type":"ContainerStarted","Data":"4b26d5ece80919a8d693d86523c2c896d40a438e8514d99401fc608aeb721f9d"}
Mar 18 10:03:25.396748 master-0 kubenswrapper[8244]: I0318 10:03:25.396744 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm" event={"ID":"aa4cba67-b5d4-46c2-8cad-1a1379f764cb","Type":"ContainerStarted","Data":"f73d449b9a38a6878df6d6eacd2e579632687dee8a2574b92908414488941263"}
Mar 18 10:03:25.398630 master-0 kubenswrapper[8244]: I0318 10:03:25.398248 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-58c9f8fc64-ssnvh" event={"ID":"f875878f-3588-42f1-9488-750d9f4582f8","Type":"ContainerStarted","Data":"b7ca349d109c7ce47be51e023fb21ab1709798444b4c309eab6316772a1ee596"}
Mar 18 10:03:25.493431 master-0 kubenswrapper[8244]: I0318 10:03:25.493368 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm" podStartSLOduration=2.39654501 podStartE2EDuration="6.493350324s" podCreationTimestamp="2026-03-18 10:03:19 +0000 UTC" firstStartedPulling="2026-03-18 10:03:20.708461755 +0000 UTC m=+517.188197923" lastFinishedPulling="2026-03-18 10:03:24.805267109 +0000 UTC m=+521.285003237" observedRunningTime="2026-03-18 10:03:25.491189099 +0000 UTC m=+521.970925227" watchObservedRunningTime="2026-03-18 10:03:25.493350324 +0000 UTC m=+521.973086452"
Mar 18 10:03:26.149997 master-0 kubenswrapper[8244]: I0318 10:03:26.149915 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7"]
Mar 18 10:03:26.150256 master-0 kubenswrapper[8244]: I0318 10:03:26.150208 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7" podUID="54e26470-5ffb-4673-9375-e80031cc6750" containerName="controller-manager" containerID="cri-o://a3602e50826c30fb0a6aafc5be0e48c4b539e69bcb2efce748d1524de14ad2a2" gracePeriod=30
Mar 18 10:03:26.157235 master-0 kubenswrapper[8244]: I0318 10:03:26.157188 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54cf6885f8-xsgcr"]
Mar 18 10:03:26.157483 master-0 kubenswrapper[8244]: I0318 10:03:26.157453 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-54cf6885f8-xsgcr" podUID="3a9c36d0-e3f3-441e-bbab-44703a0eeb70" containerName="route-controller-manager" containerID="cri-o://e686ddb757c595904ac6ebc397e0c0f4d654d782c019f30f1e1bf1e5f427b30d" gracePeriod=30
Mar 18 10:03:26.413926 master-0 kubenswrapper[8244]: I0318 10:03:26.413118 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-58c9f8fc64-ssnvh" event={"ID":"f875878f-3588-42f1-9488-750d9f4582f8","Type":"ContainerStarted","Data":"0fe709bc29589b7c73f4f842d8e7269139ca00bb476aae9ee0fa8e2d499e51d9"}
Mar 18 10:03:26.413926 master-0 kubenswrapper[8244]: I0318 10:03:26.413183 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-58c9f8fc64-ssnvh" event={"ID":"f875878f-3588-42f1-9488-750d9f4582f8","Type":"ContainerStarted","Data":"9dde9cc3162a3648ff9f22e55ed74427762826c51509b6d50c33d1daba0b5995"}
Mar 18 10:03:26.415223 master-0 kubenswrapper[8244]: I0318 10:03:26.415008 8244 generic.go:334] "Generic (PLEG): container finished" podID="3a9c36d0-e3f3-441e-bbab-44703a0eeb70" containerID="e686ddb757c595904ac6ebc397e0c0f4d654d782c019f30f1e1bf1e5f427b30d" exitCode=0
Mar 18 10:03:26.415223 master-0 kubenswrapper[8244]: I0318 10:03:26.415078 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54cf6885f8-xsgcr" event={"ID":"3a9c36d0-e3f3-441e-bbab-44703a0eeb70","Type":"ContainerDied","Data":"e686ddb757c595904ac6ebc397e0c0f4d654d782c019f30f1e1bf1e5f427b30d"}
Mar 18 10:03:26.446371 master-0 kubenswrapper[8244]: I0318 10:03:26.446297 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-58c9f8fc64-ssnvh" podStartSLOduration=2.446281682 podStartE2EDuration="2.446281682s" podCreationTimestamp="2026-03-18 10:03:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:03:26.444323632 +0000 UTC m=+522.924059770" watchObservedRunningTime="2026-03-18 10:03:26.446281682 +0000 UTC m=+522.926017810"
Mar 18 10:03:26.452178 master-0 kubenswrapper[8244]: I0318 10:03:26.447068 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-f8f5f6bc4-87dt7_54e26470-5ffb-4673-9375-e80031cc6750/controller-manager/0.log"
Mar 18 10:03:26.452178 master-0 kubenswrapper[8244]: I0318 10:03:26.447127 8244 generic.go:334] "Generic (PLEG): container finished" podID="54e26470-5ffb-4673-9375-e80031cc6750" containerID="a3602e50826c30fb0a6aafc5be0e48c4b539e69bcb2efce748d1524de14ad2a2" exitCode=0
Mar 18 10:03:26.452178 master-0 kubenswrapper[8244]: I0318 10:03:26.447993 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7" event={"ID":"54e26470-5ffb-4673-9375-e80031cc6750","Type":"ContainerDied","Data":"a3602e50826c30fb0a6aafc5be0e48c4b539e69bcb2efce748d1524de14ad2a2"}
Mar 18 10:03:26.452178 master-0 kubenswrapper[8244]: I0318 10:03:26.448031 8244 scope.go:117] "RemoveContainer" containerID="1248d2a0db71d324c2c95a679e324dd57a6ddd00508bb65cb77279b8a3a015b8"
Mar 18 10:03:26.491925 master-0 kubenswrapper[8244]: I0318 10:03:26.491372 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2"]
Mar 18 10:03:26.491925 master-0 kubenswrapper[8244]: I0318 10:03:26.491629 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2" podUID="ca4a0040-a638-46fa-a1cb-a19d83a7ebe4" containerName="multus-admission-controller" containerID="cri-o://d329cbff3f93c0797d55bbc4989994ef6bde775d852d69c46ec0c0eadff97f83" gracePeriod=30
Mar 18 10:03:26.492179 master-0 kubenswrapper[8244]: I0318 10:03:26.491927 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2" podUID="ca4a0040-a638-46fa-a1cb-a19d83a7ebe4" containerName="kube-rbac-proxy" containerID="cri-o://37124343fb8209ca549ff671c560cfcd2f841cdc0b622af9f05faea1d0440b44" gracePeriod=30
Mar 18 10:03:26.634918 master-0 kubenswrapper[8244]: I0318 10:03:26.634544 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7"
Mar 18 10:03:26.705144 master-0 kubenswrapper[8244]: I0318 10:03:26.705103 8244 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54cf6885f8-xsgcr" Mar 18 10:03:26.716122 master-0 kubenswrapper[8244]: I0318 10:03:26.716084 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54e26470-5ffb-4673-9375-e80031cc6750-proxy-ca-bundles\") pod \"54e26470-5ffb-4673-9375-e80031cc6750\" (UID: \"54e26470-5ffb-4673-9375-e80031cc6750\") " Mar 18 10:03:26.716209 master-0 kubenswrapper[8244]: I0318 10:03:26.716127 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a9c36d0-e3f3-441e-bbab-44703a0eeb70-client-ca\") pod \"3a9c36d0-e3f3-441e-bbab-44703a0eeb70\" (UID: \"3a9c36d0-e3f3-441e-bbab-44703a0eeb70\") " Mar 18 10:03:26.716209 master-0 kubenswrapper[8244]: I0318 10:03:26.716178 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54e26470-5ffb-4673-9375-e80031cc6750-client-ca\") pod \"54e26470-5ffb-4673-9375-e80031cc6750\" (UID: \"54e26470-5ffb-4673-9375-e80031cc6750\") " Mar 18 10:03:26.716298 master-0 kubenswrapper[8244]: I0318 10:03:26.716270 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a9c36d0-e3f3-441e-bbab-44703a0eeb70-serving-cert\") pod \"3a9c36d0-e3f3-441e-bbab-44703a0eeb70\" (UID: \"3a9c36d0-e3f3-441e-bbab-44703a0eeb70\") " Mar 18 10:03:26.716838 master-0 kubenswrapper[8244]: I0318 10:03:26.716717 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a9c36d0-e3f3-441e-bbab-44703a0eeb70-client-ca" (OuterVolumeSpecName: "client-ca") pod "3a9c36d0-e3f3-441e-bbab-44703a0eeb70" (UID: "3a9c36d0-e3f3-441e-bbab-44703a0eeb70"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:03:26.716838 master-0 kubenswrapper[8244]: I0318 10:03:26.716753 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54e26470-5ffb-4673-9375-e80031cc6750-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "54e26470-5ffb-4673-9375-e80031cc6750" (UID: "54e26470-5ffb-4673-9375-e80031cc6750"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:03:26.716838 master-0 kubenswrapper[8244]: I0318 10:03:26.716792 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54e26470-5ffb-4673-9375-e80031cc6750-client-ca" (OuterVolumeSpecName: "client-ca") pod "54e26470-5ffb-4673-9375-e80031cc6750" (UID: "54e26470-5ffb-4673-9375-e80031cc6750"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:03:26.716999 master-0 kubenswrapper[8244]: I0318 10:03:26.716930 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54e26470-5ffb-4673-9375-e80031cc6750-config\") pod \"54e26470-5ffb-4673-9375-e80031cc6750\" (UID: \"54e26470-5ffb-4673-9375-e80031cc6750\") " Mar 18 10:03:26.717632 master-0 kubenswrapper[8244]: I0318 10:03:26.717583 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54e26470-5ffb-4673-9375-e80031cc6750-config" (OuterVolumeSpecName: "config") pod "54e26470-5ffb-4673-9375-e80031cc6750" (UID: "54e26470-5ffb-4673-9375-e80031cc6750"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:03:26.717696 master-0 kubenswrapper[8244]: I0318 10:03:26.717646 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54e26470-5ffb-4673-9375-e80031cc6750-serving-cert\") pod \"54e26470-5ffb-4673-9375-e80031cc6750\" (UID: \"54e26470-5ffb-4673-9375-e80031cc6750\") " Mar 18 10:03:26.717696 master-0 kubenswrapper[8244]: I0318 10:03:26.717677 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a9c36d0-e3f3-441e-bbab-44703a0eeb70-config\") pod \"3a9c36d0-e3f3-441e-bbab-44703a0eeb70\" (UID: \"3a9c36d0-e3f3-441e-bbab-44703a0eeb70\") " Mar 18 10:03:26.720020 master-0 kubenswrapper[8244]: I0318 10:03:26.717765 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jqfd\" (UniqueName: \"kubernetes.io/projected/3a9c36d0-e3f3-441e-bbab-44703a0eeb70-kube-api-access-6jqfd\") pod \"3a9c36d0-e3f3-441e-bbab-44703a0eeb70\" (UID: \"3a9c36d0-e3f3-441e-bbab-44703a0eeb70\") " Mar 18 10:03:26.720020 master-0 kubenswrapper[8244]: I0318 10:03:26.719357 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkm9m\" (UniqueName: \"kubernetes.io/projected/54e26470-5ffb-4673-9375-e80031cc6750-kube-api-access-bkm9m\") pod \"54e26470-5ffb-4673-9375-e80031cc6750\" (UID: \"54e26470-5ffb-4673-9375-e80031cc6750\") " Mar 18 10:03:26.720020 master-0 kubenswrapper[8244]: I0318 10:03:26.718101 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a9c36d0-e3f3-441e-bbab-44703a0eeb70-config" (OuterVolumeSpecName: "config") pod "3a9c36d0-e3f3-441e-bbab-44703a0eeb70" (UID: "3a9c36d0-e3f3-441e-bbab-44703a0eeb70"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:03:26.720020 master-0 kubenswrapper[8244]: I0318 10:03:26.719881 8244 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a9c36d0-e3f3-441e-bbab-44703a0eeb70-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 10:03:26.720020 master-0 kubenswrapper[8244]: I0318 10:03:26.719903 8244 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54e26470-5ffb-4673-9375-e80031cc6750-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 10:03:26.720020 master-0 kubenswrapper[8244]: I0318 10:03:26.719939 8244 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54e26470-5ffb-4673-9375-e80031cc6750-config\") on node \"master-0\" DevicePath \"\"" Mar 18 10:03:26.720020 master-0 kubenswrapper[8244]: I0318 10:03:26.719950 8244 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a9c36d0-e3f3-441e-bbab-44703a0eeb70-config\") on node \"master-0\" DevicePath \"\"" Mar 18 10:03:26.720020 master-0 kubenswrapper[8244]: I0318 10:03:26.719962 8244 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54e26470-5ffb-4673-9375-e80031cc6750-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 18 10:03:26.720815 master-0 kubenswrapper[8244]: I0318 10:03:26.720601 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a9c36d0-e3f3-441e-bbab-44703a0eeb70-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3a9c36d0-e3f3-441e-bbab-44703a0eeb70" (UID: "3a9c36d0-e3f3-441e-bbab-44703a0eeb70"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 10:03:26.720815 master-0 kubenswrapper[8244]: I0318 10:03:26.720601 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54e26470-5ffb-4673-9375-e80031cc6750-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "54e26470-5ffb-4673-9375-e80031cc6750" (UID: "54e26470-5ffb-4673-9375-e80031cc6750"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 10:03:26.722229 master-0 kubenswrapper[8244]: I0318 10:03:26.722059 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a9c36d0-e3f3-441e-bbab-44703a0eeb70-kube-api-access-6jqfd" (OuterVolumeSpecName: "kube-api-access-6jqfd") pod "3a9c36d0-e3f3-441e-bbab-44703a0eeb70" (UID: "3a9c36d0-e3f3-441e-bbab-44703a0eeb70"). InnerVolumeSpecName "kube-api-access-6jqfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 10:03:26.723191 master-0 kubenswrapper[8244]: I0318 10:03:26.723070 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54e26470-5ffb-4673-9375-e80031cc6750-kube-api-access-bkm9m" (OuterVolumeSpecName: "kube-api-access-bkm9m") pod "54e26470-5ffb-4673-9375-e80031cc6750" (UID: "54e26470-5ffb-4673-9375-e80031cc6750"). InnerVolumeSpecName "kube-api-access-bkm9m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 10:03:26.821071 master-0 kubenswrapper[8244]: I0318 10:03:26.821030 8244 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a9c36d0-e3f3-441e-bbab-44703a0eeb70-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 10:03:26.821071 master-0 kubenswrapper[8244]: I0318 10:03:26.821061 8244 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54e26470-5ffb-4673-9375-e80031cc6750-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 10:03:26.821071 master-0 kubenswrapper[8244]: I0318 10:03:26.821071 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jqfd\" (UniqueName: \"kubernetes.io/projected/3a9c36d0-e3f3-441e-bbab-44703a0eeb70-kube-api-access-6jqfd\") on node \"master-0\" DevicePath \"\"" Mar 18 10:03:26.821071 master-0 kubenswrapper[8244]: I0318 10:03:26.821082 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkm9m\" (UniqueName: \"kubernetes.io/projected/54e26470-5ffb-4673-9375-e80031cc6750-kube-api-access-bkm9m\") on node \"master-0\" DevicePath \"\"" Mar 18 10:03:27.459293 master-0 kubenswrapper[8244]: I0318 10:03:27.459199 8244 generic.go:334] "Generic (PLEG): container finished" podID="ca4a0040-a638-46fa-a1cb-a19d83a7ebe4" containerID="37124343fb8209ca549ff671c560cfcd2f841cdc0b622af9f05faea1d0440b44" exitCode=0 Mar 18 10:03:27.459553 master-0 kubenswrapper[8244]: I0318 10:03:27.459277 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2" event={"ID":"ca4a0040-a638-46fa-a1cb-a19d83a7ebe4","Type":"ContainerDied","Data":"37124343fb8209ca549ff671c560cfcd2f841cdc0b622af9f05faea1d0440b44"} Mar 18 10:03:27.462077 master-0 kubenswrapper[8244]: I0318 10:03:27.462022 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7" event={"ID":"54e26470-5ffb-4673-9375-e80031cc6750","Type":"ContainerDied","Data":"bc2b518f5588a6b282272226db84509d9098206fb841d766ca2a81d956bdb25e"} Mar 18 10:03:27.462272 master-0 kubenswrapper[8244]: I0318 10:03:27.462079 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7" Mar 18 10:03:27.462272 master-0 kubenswrapper[8244]: I0318 10:03:27.462096 8244 scope.go:117] "RemoveContainer" containerID="a3602e50826c30fb0a6aafc5be0e48c4b539e69bcb2efce748d1524de14ad2a2" Mar 18 10:03:27.465159 master-0 kubenswrapper[8244]: I0318 10:03:27.465099 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54cf6885f8-xsgcr" Mar 18 10:03:27.466090 master-0 kubenswrapper[8244]: I0318 10:03:27.466023 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54cf6885f8-xsgcr" event={"ID":"3a9c36d0-e3f3-441e-bbab-44703a0eeb70","Type":"ContainerDied","Data":"5e145a875bcfd693a2d0eada78d480516e66f2586ddfa00ba2cc3fc84918f220"} Mar 18 10:03:27.486466 master-0 kubenswrapper[8244]: I0318 10:03:27.486381 8244 scope.go:117] "RemoveContainer" containerID="e686ddb757c595904ac6ebc397e0c0f4d654d782c019f30f1e1bf1e5f427b30d" Mar 18 10:03:27.543158 master-0 kubenswrapper[8244]: I0318 10:03:27.531603 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7"] Mar 18 10:03:27.546407 master-0 kubenswrapper[8244]: I0318 10:03:27.546146 8244 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-f8f5f6bc4-87dt7"] Mar 18 10:03:27.549901 master-0 kubenswrapper[8244]: I0318 10:03:27.549800 8244 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68"] Mar 18 10:03:27.551284 master-0 kubenswrapper[8244]: E0318 10:03:27.550247 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a9c36d0-e3f3-441e-bbab-44703a0eeb70" containerName="route-controller-manager" Mar 18 10:03:27.551284 master-0 kubenswrapper[8244]: I0318 10:03:27.550279 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a9c36d0-e3f3-441e-bbab-44703a0eeb70" containerName="route-controller-manager" Mar 18 10:03:27.551284 master-0 kubenswrapper[8244]: E0318 10:03:27.550307 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54e26470-5ffb-4673-9375-e80031cc6750" containerName="controller-manager" Mar 18 10:03:27.551284 master-0 kubenswrapper[8244]: I0318 10:03:27.550322 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="54e26470-5ffb-4673-9375-e80031cc6750" containerName="controller-manager" Mar 18 10:03:27.551284 master-0 kubenswrapper[8244]: I0318 10:03:27.550550 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="54e26470-5ffb-4673-9375-e80031cc6750" containerName="controller-manager" Mar 18 10:03:27.551284 master-0 kubenswrapper[8244]: I0318 10:03:27.550577 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a9c36d0-e3f3-441e-bbab-44703a0eeb70" containerName="route-controller-manager" Mar 18 10:03:27.551284 master-0 kubenswrapper[8244]: I0318 10:03:27.550592 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="54e26470-5ffb-4673-9375-e80031cc6750" containerName="controller-manager" Mar 18 10:03:27.551284 master-0 kubenswrapper[8244]: I0318 10:03:27.551284 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" Mar 18 10:03:27.553874 master-0 kubenswrapper[8244]: I0318 10:03:27.552878 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 18 10:03:27.554009 master-0 kubenswrapper[8244]: I0318 10:03:27.553885 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9"] Mar 18 10:03:27.554301 master-0 kubenswrapper[8244]: E0318 10:03:27.554211 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54e26470-5ffb-4673-9375-e80031cc6750" containerName="controller-manager" Mar 18 10:03:27.554301 master-0 kubenswrapper[8244]: I0318 10:03:27.554230 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 18 10:03:27.554475 master-0 kubenswrapper[8244]: I0318 10:03:27.554375 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-c5mc5" Mar 18 10:03:27.554475 master-0 kubenswrapper[8244]: I0318 10:03:27.554374 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 18 10:03:27.554475 master-0 kubenswrapper[8244]: I0318 10:03:27.554236 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="54e26470-5ffb-4673-9375-e80031cc6750" containerName="controller-manager" Mar 18 10:03:27.555930 master-0 kubenswrapper[8244]: I0318 10:03:27.555067 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 18 10:03:27.555930 master-0 kubenswrapper[8244]: I0318 10:03:27.555209 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" Mar 18 10:03:27.556122 master-0 kubenswrapper[8244]: I0318 10:03:27.555313 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 10:03:27.564312 master-0 kubenswrapper[8244]: I0318 10:03:27.564263 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-czmbt" Mar 18 10:03:27.565099 master-0 kubenswrapper[8244]: I0318 10:03:27.565011 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 18 10:03:27.565099 master-0 kubenswrapper[8244]: I0318 10:03:27.565060 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 10:03:27.565420 master-0 kubenswrapper[8244]: I0318 10:03:27.565114 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 18 10:03:27.565420 master-0 kubenswrapper[8244]: I0318 10:03:27.565022 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 10:03:27.565420 master-0 kubenswrapper[8244]: I0318 10:03:27.565356 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 10:03:27.595083 master-0 kubenswrapper[8244]: I0318 10:03:27.595013 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 10:03:27.598517 master-0 kubenswrapper[8244]: I0318 10:03:27.597569 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9"] Mar 18 10:03:27.623713 master-0 kubenswrapper[8244]: I0318 10:03:27.623640 8244 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68"] Mar 18 10:03:27.627698 master-0 kubenswrapper[8244]: I0318 10:03:27.627670 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54cf6885f8-xsgcr"] Mar 18 10:03:27.631459 master-0 kubenswrapper[8244]: I0318 10:03:27.631396 8244 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54cf6885f8-xsgcr"] Mar 18 10:03:27.633082 master-0 kubenswrapper[8244]: I0318 10:03:27.633042 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-client-ca\") pod \"route-controller-manager-5657df7dd8-4pp68\" (UID: \"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d\") " pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" Mar 18 10:03:27.633082 master-0 kubenswrapper[8244]: I0318 10:03:27.633081 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-serving-cert\") pod \"route-controller-manager-5657df7dd8-4pp68\" (UID: \"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d\") " pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" Mar 18 10:03:27.633229 master-0 kubenswrapper[8244]: I0318 10:03:27.633147 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-proxy-ca-bundles\") pod \"controller-manager-6c87d45bb4-vxcx9\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") " pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" Mar 18 10:03:27.633229 master-0 kubenswrapper[8244]: I0318 10:03:27.633177 8244 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-config\") pod \"controller-manager-6c87d45bb4-vxcx9\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") " pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" Mar 18 10:03:27.633229 master-0 kubenswrapper[8244]: I0318 10:03:27.633224 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-client-ca\") pod \"controller-manager-6c87d45bb4-vxcx9\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") " pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" Mar 18 10:03:27.633413 master-0 kubenswrapper[8244]: I0318 10:03:27.633245 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-serving-cert\") pod \"controller-manager-6c87d45bb4-vxcx9\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") " pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" Mar 18 10:03:27.633413 master-0 kubenswrapper[8244]: I0318 10:03:27.633275 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x46bf\" (UniqueName: \"kubernetes.io/projected/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-kube-api-access-x46bf\") pod \"controller-manager-6c87d45bb4-vxcx9\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") " pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" Mar 18 10:03:27.633413 master-0 kubenswrapper[8244]: I0318 10:03:27.633314 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-config\") pod 
\"route-controller-manager-5657df7dd8-4pp68\" (UID: \"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d\") " pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" Mar 18 10:03:27.633413 master-0 kubenswrapper[8244]: I0318 10:03:27.633335 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpk5h\" (UniqueName: \"kubernetes.io/projected/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-kube-api-access-gpk5h\") pod \"route-controller-manager-5657df7dd8-4pp68\" (UID: \"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d\") " pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" Mar 18 10:03:27.734727 master-0 kubenswrapper[8244]: I0318 10:03:27.734531 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-client-ca\") pod \"route-controller-manager-5657df7dd8-4pp68\" (UID: \"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d\") " pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" Mar 18 10:03:27.734727 master-0 kubenswrapper[8244]: I0318 10:03:27.734587 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-serving-cert\") pod \"route-controller-manager-5657df7dd8-4pp68\" (UID: \"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d\") " pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" Mar 18 10:03:27.736042 master-0 kubenswrapper[8244]: I0318 10:03:27.735595 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-client-ca\") pod \"route-controller-manager-5657df7dd8-4pp68\" (UID: \"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d\") " pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" Mar 18 
10:03:27.736042 master-0 kubenswrapper[8244]: I0318 10:03:27.735664 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-proxy-ca-bundles\") pod \"controller-manager-6c87d45bb4-vxcx9\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") " pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" Mar 18 10:03:27.736042 master-0 kubenswrapper[8244]: I0318 10:03:27.735700 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-config\") pod \"controller-manager-6c87d45bb4-vxcx9\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") " pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" Mar 18 10:03:27.736407 master-0 kubenswrapper[8244]: I0318 10:03:27.736172 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-client-ca\") pod \"controller-manager-6c87d45bb4-vxcx9\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") " pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" Mar 18 10:03:27.736407 master-0 kubenswrapper[8244]: I0318 10:03:27.736255 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-serving-cert\") pod \"controller-manager-6c87d45bb4-vxcx9\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") " pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" Mar 18 10:03:27.736407 master-0 kubenswrapper[8244]: I0318 10:03:27.736319 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x46bf\" (UniqueName: \"kubernetes.io/projected/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-kube-api-access-x46bf\") pod 
\"controller-manager-6c87d45bb4-vxcx9\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") " pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" Mar 18 10:03:27.736765 master-0 kubenswrapper[8244]: I0318 10:03:27.736426 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-config\") pod \"route-controller-manager-5657df7dd8-4pp68\" (UID: \"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d\") " pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" Mar 18 10:03:27.736765 master-0 kubenswrapper[8244]: I0318 10:03:27.736492 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpk5h\" (UniqueName: \"kubernetes.io/projected/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-kube-api-access-gpk5h\") pod \"route-controller-manager-5657df7dd8-4pp68\" (UID: \"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d\") " pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" Mar 18 10:03:27.738008 master-0 kubenswrapper[8244]: I0318 10:03:27.737954 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-config\") pod \"controller-manager-6c87d45bb4-vxcx9\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") " pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" Mar 18 10:03:27.738156 master-0 kubenswrapper[8244]: I0318 10:03:27.738074 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-proxy-ca-bundles\") pod \"controller-manager-6c87d45bb4-vxcx9\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") " pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" Mar 18 10:03:27.738326 master-0 kubenswrapper[8244]: I0318 10:03:27.738195 8244 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-config\") pod \"route-controller-manager-5657df7dd8-4pp68\" (UID: \"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d\") " pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" Mar 18 10:03:27.738692 master-0 kubenswrapper[8244]: I0318 10:03:27.738651 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-client-ca\") pod \"controller-manager-6c87d45bb4-vxcx9\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") " pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" Mar 18 10:03:27.739738 master-0 kubenswrapper[8244]: I0318 10:03:27.739694 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-serving-cert\") pod \"route-controller-manager-5657df7dd8-4pp68\" (UID: \"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d\") " pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" Mar 18 10:03:27.741080 master-0 kubenswrapper[8244]: I0318 10:03:27.741023 8244 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a9c36d0-e3f3-441e-bbab-44703a0eeb70" path="/var/lib/kubelet/pods/3a9c36d0-e3f3-441e-bbab-44703a0eeb70/volumes" Mar 18 10:03:27.741226 master-0 kubenswrapper[8244]: I0318 10:03:27.741179 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-serving-cert\") pod \"controller-manager-6c87d45bb4-vxcx9\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") " pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" Mar 18 10:03:27.747215 master-0 kubenswrapper[8244]: I0318 10:03:27.747145 8244 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="54e26470-5ffb-4673-9375-e80031cc6750" path="/var/lib/kubelet/pods/54e26470-5ffb-4673-9375-e80031cc6750/volumes" Mar 18 10:03:27.756946 master-0 kubenswrapper[8244]: I0318 10:03:27.756870 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x46bf\" (UniqueName: \"kubernetes.io/projected/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-kube-api-access-x46bf\") pod \"controller-manager-6c87d45bb4-vxcx9\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") " pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" Mar 18 10:03:27.761481 master-0 kubenswrapper[8244]: I0318 10:03:27.761444 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpk5h\" (UniqueName: \"kubernetes.io/projected/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-kube-api-access-gpk5h\") pod \"route-controller-manager-5657df7dd8-4pp68\" (UID: \"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d\") " pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" Mar 18 10:03:27.905290 master-0 kubenswrapper[8244]: I0318 10:03:27.905197 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" Mar 18 10:03:27.923675 master-0 kubenswrapper[8244]: I0318 10:03:27.923591 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" Mar 18 10:03:28.901964 master-0 kubenswrapper[8244]: I0318 10:03:28.901372 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9"] Mar 18 10:03:28.908977 master-0 kubenswrapper[8244]: W0318 10:03:28.908905 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fc664ff_2e8f_441d_82dc_8f21c1d362d7.slice/crio-2b738d6ab8a2079028f3f1e5804df92e50d8884090bb1653ec14e4d63a6afccd WatchSource:0}: Error finding container 2b738d6ab8a2079028f3f1e5804df92e50d8884090bb1653ec14e4d63a6afccd: Status 404 returned error can't find the container with id 2b738d6ab8a2079028f3f1e5804df92e50d8884090bb1653ec14e4d63a6afccd Mar 18 10:03:28.913072 master-0 kubenswrapper[8244]: W0318 10:03:28.912514 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ef5f9ee_b76a_4d53_9e3f_e25f4e11d33d.slice/crio-b588169f9714563a6db5379251857ae747425b95554009dbd48c296b2e82b297 WatchSource:0}: Error finding container b588169f9714563a6db5379251857ae747425b95554009dbd48c296b2e82b297: Status 404 returned error can't find the container with id b588169f9714563a6db5379251857ae747425b95554009dbd48c296b2e82b297 Mar 18 10:03:28.923539 master-0 kubenswrapper[8244]: I0318 10:03:28.923471 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68"] Mar 18 10:03:29.492481 master-0 kubenswrapper[8244]: I0318 10:03:29.492326 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" event={"ID":"9fc664ff-2e8f-441d-82dc-8f21c1d362d7","Type":"ContainerStarted","Data":"6959115a6f11e9fd2881ca4214b94da71213aad3f3ef00ebec36ed62d0816399"} Mar 18 10:03:29.492481 master-0 
kubenswrapper[8244]: I0318 10:03:29.492415 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" event={"ID":"9fc664ff-2e8f-441d-82dc-8f21c1d362d7","Type":"ContainerStarted","Data":"2b738d6ab8a2079028f3f1e5804df92e50d8884090bb1653ec14e4d63a6afccd"} Mar 18 10:03:29.492906 master-0 kubenswrapper[8244]: I0318 10:03:29.492852 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" Mar 18 10:03:29.496458 master-0 kubenswrapper[8244]: I0318 10:03:29.496405 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" event={"ID":"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d","Type":"ContainerStarted","Data":"ef56f38c2bc505e5fbc078e115510767e1b06d3c1193709a420591be902fdca8"} Mar 18 10:03:29.496458 master-0 kubenswrapper[8244]: I0318 10:03:29.496460 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" event={"ID":"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d","Type":"ContainerStarted","Data":"b588169f9714563a6db5379251857ae747425b95554009dbd48c296b2e82b297"} Mar 18 10:03:29.496797 master-0 kubenswrapper[8244]: I0318 10:03:29.496776 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" Mar 18 10:03:29.497628 master-0 kubenswrapper[8244]: I0318 10:03:29.497582 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" Mar 18 10:03:29.515433 master-0 kubenswrapper[8244]: I0318 10:03:29.515357 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" Mar 18 10:03:29.554553 master-0 kubenswrapper[8244]: 
I0318 10:03:29.554457 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" podStartSLOduration=3.554431353 podStartE2EDuration="3.554431353s" podCreationTimestamp="2026-03-18 10:03:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:03:29.55234617 +0000 UTC m=+526.032082298" watchObservedRunningTime="2026-03-18 10:03:29.554431353 +0000 UTC m=+526.034167491" Mar 18 10:03:29.671258 master-0 kubenswrapper[8244]: I0318 10:03:29.671050 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" podStartSLOduration=3.671023037 podStartE2EDuration="3.671023037s" podCreationTimestamp="2026-03-18 10:03:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:03:29.666536044 +0000 UTC m=+526.146272192" watchObservedRunningTime="2026-03-18 10:03:29.671023037 +0000 UTC m=+526.150759175" Mar 18 10:03:30.710293 master-0 kubenswrapper[8244]: I0318 10:03:30.710226 8244 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 10:03:30.710893 master-0 kubenswrapper[8244]: I0318 10:03:30.710724 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="b82be17f9a809bd5efbd88c0026e8713" containerName="kube-controller-manager" containerID="cri-o://addfa727884c409aa1d45cf9824fc92e862d34a63cb7525a44148f0f4014c92c" gracePeriod=30 Mar 18 10:03:30.710969 master-0 kubenswrapper[8244]: I0318 10:03:30.710903 8244 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="b82be17f9a809bd5efbd88c0026e8713" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://5cbeab69e9a6a1b3e6913474059c61b2d058adacd52d3e32e751d6880d4458a1" gracePeriod=30 Mar 18 10:03:30.711035 master-0 kubenswrapper[8244]: I0318 10:03:30.710894 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="b82be17f9a809bd5efbd88c0026e8713" containerName="cluster-policy-controller" containerID="cri-o://81ce0ae9fb472bc21ba6f7e4abbc9bd3b5ebc1fd28497178cf7c458689b0f9d3" gracePeriod=30 Mar 18 10:03:30.711121 master-0 kubenswrapper[8244]: I0318 10:03:30.710864 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="b82be17f9a809bd5efbd88c0026e8713" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://f87bef1b940bda1d2e7547c8dcb5e5796be97a1d493b8ed2c6e88695b761685f" gracePeriod=30 Mar 18 10:03:30.716674 master-0 kubenswrapper[8244]: I0318 10:03:30.715889 8244 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 10:03:30.717100 master-0 kubenswrapper[8244]: E0318 10:03:30.717031 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b82be17f9a809bd5efbd88c0026e8713" containerName="kube-controller-manager-cert-syncer" Mar 18 10:03:30.717100 master-0 kubenswrapper[8244]: I0318 10:03:30.717077 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="b82be17f9a809bd5efbd88c0026e8713" containerName="kube-controller-manager-cert-syncer" Mar 18 10:03:30.717218 master-0 kubenswrapper[8244]: E0318 10:03:30.717122 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b82be17f9a809bd5efbd88c0026e8713" containerName="cluster-policy-controller" Mar 18 
10:03:30.717218 master-0 kubenswrapper[8244]: I0318 10:03:30.717141 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="b82be17f9a809bd5efbd88c0026e8713" containerName="cluster-policy-controller" Mar 18 10:03:30.717218 master-0 kubenswrapper[8244]: E0318 10:03:30.717208 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b82be17f9a809bd5efbd88c0026e8713" containerName="kube-controller-manager-recovery-controller" Mar 18 10:03:30.717345 master-0 kubenswrapper[8244]: I0318 10:03:30.717228 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="b82be17f9a809bd5efbd88c0026e8713" containerName="kube-controller-manager-recovery-controller" Mar 18 10:03:30.717345 master-0 kubenswrapper[8244]: E0318 10:03:30.717271 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b82be17f9a809bd5efbd88c0026e8713" containerName="kube-controller-manager" Mar 18 10:03:30.717345 master-0 kubenswrapper[8244]: I0318 10:03:30.717288 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="b82be17f9a809bd5efbd88c0026e8713" containerName="kube-controller-manager" Mar 18 10:03:30.719518 master-0 kubenswrapper[8244]: I0318 10:03:30.719027 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="b82be17f9a809bd5efbd88c0026e8713" containerName="kube-controller-manager" Mar 18 10:03:30.719518 master-0 kubenswrapper[8244]: I0318 10:03:30.719119 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="b82be17f9a809bd5efbd88c0026e8713" containerName="kube-controller-manager-cert-syncer" Mar 18 10:03:30.719518 master-0 kubenswrapper[8244]: I0318 10:03:30.719176 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="b82be17f9a809bd5efbd88c0026e8713" containerName="cluster-policy-controller" Mar 18 10:03:30.719518 master-0 kubenswrapper[8244]: I0318 10:03:30.719240 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="b82be17f9a809bd5efbd88c0026e8713" 
containerName="kube-controller-manager-recovery-controller" Mar 18 10:03:30.778586 master-0 kubenswrapper[8244]: I0318 10:03:30.778517 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/af8e875368eec13e995ea08015e08c42-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"af8e875368eec13e995ea08015e08c42\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:03:30.778849 master-0 kubenswrapper[8244]: I0318 10:03:30.778741 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/af8e875368eec13e995ea08015e08c42-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"af8e875368eec13e995ea08015e08c42\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:03:30.880230 master-0 kubenswrapper[8244]: I0318 10:03:30.880164 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/af8e875368eec13e995ea08015e08c42-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"af8e875368eec13e995ea08015e08c42\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:03:30.880432 master-0 kubenswrapper[8244]: I0318 10:03:30.880396 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/af8e875368eec13e995ea08015e08c42-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"af8e875368eec13e995ea08015e08c42\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:03:30.880488 master-0 kubenswrapper[8244]: I0318 10:03:30.880443 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/af8e875368eec13e995ea08015e08c42-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"af8e875368eec13e995ea08015e08c42\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:03:30.880536 master-0 kubenswrapper[8244]: I0318 10:03:30.880384 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/af8e875368eec13e995ea08015e08c42-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"af8e875368eec13e995ea08015e08c42\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:03:31.521948 master-0 kubenswrapper[8244]: I0318 10:03:31.520176 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_b82be17f9a809bd5efbd88c0026e8713/kube-controller-manager-cert-syncer/0.log" Mar 18 10:03:31.521948 master-0 kubenswrapper[8244]: I0318 10:03:31.521517 8244 generic.go:334] "Generic (PLEG): container finished" podID="b82be17f9a809bd5efbd88c0026e8713" containerID="f87bef1b940bda1d2e7547c8dcb5e5796be97a1d493b8ed2c6e88695b761685f" exitCode=0 Mar 18 10:03:31.521948 master-0 kubenswrapper[8244]: I0318 10:03:31.521564 8244 generic.go:334] "Generic (PLEG): container finished" podID="b82be17f9a809bd5efbd88c0026e8713" containerID="5cbeab69e9a6a1b3e6913474059c61b2d058adacd52d3e32e751d6880d4458a1" exitCode=2 Mar 18 10:03:31.521948 master-0 kubenswrapper[8244]: I0318 10:03:31.521634 8244 generic.go:334] "Generic (PLEG): container finished" podID="b82be17f9a809bd5efbd88c0026e8713" containerID="addfa727884c409aa1d45cf9824fc92e862d34a63cb7525a44148f0f4014c92c" exitCode=0 Mar 18 10:03:32.032404 master-0 kubenswrapper[8244]: I0318 10:03:32.032362 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_b82be17f9a809bd5efbd88c0026e8713/kube-controller-manager-cert-syncer/0.log" Mar 
18 10:03:32.033609 master-0 kubenswrapper[8244]: I0318 10:03:32.033576 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:03:32.039146 master-0 kubenswrapper[8244]: I0318 10:03:32.039076 8244 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="b82be17f9a809bd5efbd88c0026e8713" podUID="af8e875368eec13e995ea08015e08c42" Mar 18 10:03:32.098323 master-0 kubenswrapper[8244]: I0318 10:03:32.098269 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b82be17f9a809bd5efbd88c0026e8713-cert-dir\") pod \"b82be17f9a809bd5efbd88c0026e8713\" (UID: \"b82be17f9a809bd5efbd88c0026e8713\") " Mar 18 10:03:32.098323 master-0 kubenswrapper[8244]: I0318 10:03:32.098331 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b82be17f9a809bd5efbd88c0026e8713-resource-dir\") pod \"b82be17f9a809bd5efbd88c0026e8713\" (UID: \"b82be17f9a809bd5efbd88c0026e8713\") " Mar 18 10:03:32.098557 master-0 kubenswrapper[8244]: I0318 10:03:32.098386 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b82be17f9a809bd5efbd88c0026e8713-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "b82be17f9a809bd5efbd88c0026e8713" (UID: "b82be17f9a809bd5efbd88c0026e8713"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:03:32.098557 master-0 kubenswrapper[8244]: I0318 10:03:32.098456 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b82be17f9a809bd5efbd88c0026e8713-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "b82be17f9a809bd5efbd88c0026e8713" (UID: "b82be17f9a809bd5efbd88c0026e8713"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:03:32.098628 master-0 kubenswrapper[8244]: I0318 10:03:32.098578 8244 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b82be17f9a809bd5efbd88c0026e8713-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 10:03:32.098628 master-0 kubenswrapper[8244]: I0318 10:03:32.098591 8244 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b82be17f9a809bd5efbd88c0026e8713-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 10:03:32.529944 master-0 kubenswrapper[8244]: I0318 10:03:32.529858 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_b82be17f9a809bd5efbd88c0026e8713/kube-controller-manager-cert-syncer/0.log" Mar 18 10:03:32.530903 master-0 kubenswrapper[8244]: I0318 10:03:32.530779 8244 generic.go:334] "Generic (PLEG): container finished" podID="b82be17f9a809bd5efbd88c0026e8713" containerID="81ce0ae9fb472bc21ba6f7e4abbc9bd3b5ebc1fd28497178cf7c458689b0f9d3" exitCode=0 Mar 18 10:03:32.530903 master-0 kubenswrapper[8244]: I0318 10:03:32.530859 8244 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:03:32.531565 master-0 kubenswrapper[8244]: I0318 10:03:32.530861 8244 scope.go:117] "RemoveContainer" containerID="f87bef1b940bda1d2e7547c8dcb5e5796be97a1d493b8ed2c6e88695b761685f" Mar 18 10:03:32.533258 master-0 kubenswrapper[8244]: I0318 10:03:32.533190 8244 generic.go:334] "Generic (PLEG): container finished" podID="346d6f79-a9bd-4097-abe7-b68830aa2e84" containerID="974b6ae008035f16bd3f106b986b5975e658b69a9a1e106bd2d280e49e6fba6d" exitCode=0 Mar 18 10:03:32.533258 master-0 kubenswrapper[8244]: I0318 10:03:32.533231 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"346d6f79-a9bd-4097-abe7-b68830aa2e84","Type":"ContainerDied","Data":"974b6ae008035f16bd3f106b986b5975e658b69a9a1e106bd2d280e49e6fba6d"} Mar 18 10:03:32.536161 master-0 kubenswrapper[8244]: I0318 10:03:32.536113 8244 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="b82be17f9a809bd5efbd88c0026e8713" podUID="af8e875368eec13e995ea08015e08c42" Mar 18 10:03:32.551687 master-0 kubenswrapper[8244]: I0318 10:03:32.551647 8244 scope.go:117] "RemoveContainer" containerID="5cbeab69e9a6a1b3e6913474059c61b2d058adacd52d3e32e751d6880d4458a1" Mar 18 10:03:32.568763 master-0 kubenswrapper[8244]: I0318 10:03:32.563556 8244 scope.go:117] "RemoveContainer" containerID="81ce0ae9fb472bc21ba6f7e4abbc9bd3b5ebc1fd28497178cf7c458689b0f9d3" Mar 18 10:03:32.580488 master-0 kubenswrapper[8244]: I0318 10:03:32.580441 8244 scope.go:117] "RemoveContainer" containerID="addfa727884c409aa1d45cf9824fc92e862d34a63cb7525a44148f0f4014c92c" Mar 18 10:03:32.594102 master-0 kubenswrapper[8244]: I0318 10:03:32.594059 8244 scope.go:117] "RemoveContainer" containerID="f87bef1b940bda1d2e7547c8dcb5e5796be97a1d493b8ed2c6e88695b761685f" Mar 18 
10:03:32.594413 master-0 kubenswrapper[8244]: E0318 10:03:32.594387 8244 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f87bef1b940bda1d2e7547c8dcb5e5796be97a1d493b8ed2c6e88695b761685f\": container with ID starting with f87bef1b940bda1d2e7547c8dcb5e5796be97a1d493b8ed2c6e88695b761685f not found: ID does not exist" containerID="f87bef1b940bda1d2e7547c8dcb5e5796be97a1d493b8ed2c6e88695b761685f" Mar 18 10:03:32.594413 master-0 kubenswrapper[8244]: I0318 10:03:32.594417 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f87bef1b940bda1d2e7547c8dcb5e5796be97a1d493b8ed2c6e88695b761685f"} err="failed to get container status \"f87bef1b940bda1d2e7547c8dcb5e5796be97a1d493b8ed2c6e88695b761685f\": rpc error: code = NotFound desc = could not find container \"f87bef1b940bda1d2e7547c8dcb5e5796be97a1d493b8ed2c6e88695b761685f\": container with ID starting with f87bef1b940bda1d2e7547c8dcb5e5796be97a1d493b8ed2c6e88695b761685f not found: ID does not exist" Mar 18 10:03:32.594562 master-0 kubenswrapper[8244]: I0318 10:03:32.594440 8244 scope.go:117] "RemoveContainer" containerID="5cbeab69e9a6a1b3e6913474059c61b2d058adacd52d3e32e751d6880d4458a1" Mar 18 10:03:32.594990 master-0 kubenswrapper[8244]: E0318 10:03:32.594761 8244 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cbeab69e9a6a1b3e6913474059c61b2d058adacd52d3e32e751d6880d4458a1\": container with ID starting with 5cbeab69e9a6a1b3e6913474059c61b2d058adacd52d3e32e751d6880d4458a1 not found: ID does not exist" containerID="5cbeab69e9a6a1b3e6913474059c61b2d058adacd52d3e32e751d6880d4458a1" Mar 18 10:03:32.594990 master-0 kubenswrapper[8244]: I0318 10:03:32.594816 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cbeab69e9a6a1b3e6913474059c61b2d058adacd52d3e32e751d6880d4458a1"} err="failed 
to get container status \"5cbeab69e9a6a1b3e6913474059c61b2d058adacd52d3e32e751d6880d4458a1\": rpc error: code = NotFound desc = could not find container \"5cbeab69e9a6a1b3e6913474059c61b2d058adacd52d3e32e751d6880d4458a1\": container with ID starting with 5cbeab69e9a6a1b3e6913474059c61b2d058adacd52d3e32e751d6880d4458a1 not found: ID does not exist" Mar 18 10:03:32.594990 master-0 kubenswrapper[8244]: I0318 10:03:32.594872 8244 scope.go:117] "RemoveContainer" containerID="81ce0ae9fb472bc21ba6f7e4abbc9bd3b5ebc1fd28497178cf7c458689b0f9d3" Mar 18 10:03:32.595228 master-0 kubenswrapper[8244]: E0318 10:03:32.595206 8244 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81ce0ae9fb472bc21ba6f7e4abbc9bd3b5ebc1fd28497178cf7c458689b0f9d3\": container with ID starting with 81ce0ae9fb472bc21ba6f7e4abbc9bd3b5ebc1fd28497178cf7c458689b0f9d3 not found: ID does not exist" containerID="81ce0ae9fb472bc21ba6f7e4abbc9bd3b5ebc1fd28497178cf7c458689b0f9d3" Mar 18 10:03:32.595290 master-0 kubenswrapper[8244]: I0318 10:03:32.595230 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81ce0ae9fb472bc21ba6f7e4abbc9bd3b5ebc1fd28497178cf7c458689b0f9d3"} err="failed to get container status \"81ce0ae9fb472bc21ba6f7e4abbc9bd3b5ebc1fd28497178cf7c458689b0f9d3\": rpc error: code = NotFound desc = could not find container \"81ce0ae9fb472bc21ba6f7e4abbc9bd3b5ebc1fd28497178cf7c458689b0f9d3\": container with ID starting with 81ce0ae9fb472bc21ba6f7e4abbc9bd3b5ebc1fd28497178cf7c458689b0f9d3 not found: ID does not exist" Mar 18 10:03:32.595290 master-0 kubenswrapper[8244]: I0318 10:03:32.595244 8244 scope.go:117] "RemoveContainer" containerID="addfa727884c409aa1d45cf9824fc92e862d34a63cb7525a44148f0f4014c92c" Mar 18 10:03:32.595984 master-0 kubenswrapper[8244]: E0318 10:03:32.595952 8244 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"addfa727884c409aa1d45cf9824fc92e862d34a63cb7525a44148f0f4014c92c\": container with ID starting with addfa727884c409aa1d45cf9824fc92e862d34a63cb7525a44148f0f4014c92c not found: ID does not exist" containerID="addfa727884c409aa1d45cf9824fc92e862d34a63cb7525a44148f0f4014c92c" Mar 18 10:03:32.596055 master-0 kubenswrapper[8244]: I0318 10:03:32.595986 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"addfa727884c409aa1d45cf9824fc92e862d34a63cb7525a44148f0f4014c92c"} err="failed to get container status \"addfa727884c409aa1d45cf9824fc92e862d34a63cb7525a44148f0f4014c92c\": rpc error: code = NotFound desc = could not find container \"addfa727884c409aa1d45cf9824fc92e862d34a63cb7525a44148f0f4014c92c\": container with ID starting with addfa727884c409aa1d45cf9824fc92e862d34a63cb7525a44148f0f4014c92c not found: ID does not exist" Mar 18 10:03:32.633031 master-0 kubenswrapper[8244]: I0318 10:03:32.632907 8244 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="b82be17f9a809bd5efbd88c0026e8713" podUID="af8e875368eec13e995ea08015e08c42" Mar 18 10:03:32.641853 master-0 kubenswrapper[8244]: I0318 10:03:32.641779 8244 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 18 10:03:32.642089 master-0 kubenswrapper[8244]: I0318 10:03:32.642043 8244 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 18 10:03:32.642369 master-0 kubenswrapper[8244]: I0318 10:03:32.642290 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-cert-syncer" containerID="cri-o://7c521115ddea902792bf48e852856b512a5618ac1e205481b00a57548b627114" 
gracePeriod=30 Mar 18 10:03:32.642490 master-0 kubenswrapper[8244]: E0318 10:03:32.642314 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-recovery-controller" Mar 18 10:03:32.642545 master-0 kubenswrapper[8244]: I0318 10:03:32.642505 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-recovery-controller" Mar 18 10:03:32.642545 master-0 kubenswrapper[8244]: I0318 10:03:32.642043 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler" containerID="cri-o://98158274131e9b1c448b325fae48722d74ef93130547141c9b0a75c46c204334" gracePeriod=30 Mar 18 10:03:32.642651 master-0 kubenswrapper[8244]: E0318 10:03:32.642569 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="wait-for-host-port" Mar 18 10:03:32.642651 master-0 kubenswrapper[8244]: I0318 10:03:32.642584 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="wait-for-host-port" Mar 18 10:03:32.642651 master-0 kubenswrapper[8244]: E0318 10:03:32.642617 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-cert-syncer" Mar 18 10:03:32.642651 master-0 kubenswrapper[8244]: I0318 10:03:32.642635 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-cert-syncer" Mar 18 10:03:32.642849 master-0 kubenswrapper[8244]: E0318 10:03:32.642675 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler" Mar 18 10:03:32.642849 master-0 kubenswrapper[8244]: I0318 10:03:32.642688 
8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler" Mar 18 10:03:32.643044 master-0 kubenswrapper[8244]: I0318 10:03:32.643009 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler" Mar 18 10:03:32.643044 master-0 kubenswrapper[8244]: I0318 10:03:32.643031 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-recovery-controller" Mar 18 10:03:32.643162 master-0 kubenswrapper[8244]: I0318 10:03:32.643047 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-cert-syncer" Mar 18 10:03:32.643423 master-0 kubenswrapper[8244]: I0318 10:03:32.643381 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-recovery-controller" containerID="cri-o://cd19f6008d757f0df145410d19ef8a0a4892b1a9570868a0f25d4db947985c0d" gracePeriod=30 Mar 18 10:03:32.654125 master-0 kubenswrapper[8244]: I0318 10:03:32.653922 8244 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" start-of-body= Mar 18 10:03:32.654125 master-0 kubenswrapper[8244]: I0318 10:03:32.654001 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" Mar 18 10:03:32.707033 
master-0 kubenswrapper[8244]: I0318 10:03:32.706643 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 10:03:32.707033 master-0 kubenswrapper[8244]: I0318 10:03:32.706724 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 10:03:32.808521 master-0 kubenswrapper[8244]: I0318 10:03:32.808395 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 10:03:32.808521 master-0 kubenswrapper[8244]: I0318 10:03:32.808485 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 10:03:32.808521 master-0 kubenswrapper[8244]: I0318 10:03:32.808502 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 10:03:32.809114 master-0 kubenswrapper[8244]: I0318 10:03:32.808595 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 10:03:32.865306 master-0 kubenswrapper[8244]: I0318 10:03:32.865203 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8413125cf444e5c95f023c5dd9c6151e/kube-scheduler-cert-syncer/0.log" Mar 18 10:03:32.868339 master-0 kubenswrapper[8244]: I0318 10:03:32.866183 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 10:03:32.877409 master-0 kubenswrapper[8244]: I0318 10:03:32.876024 8244 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="8413125cf444e5c95f023c5dd9c6151e" podUID="8e27b7d086edf5d2cf47b703574641d8" Mar 18 10:03:32.910015 master-0 kubenswrapper[8244]: I0318 10:03:32.909886 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") pod \"8413125cf444e5c95f023c5dd9c6151e\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " Mar 18 10:03:32.910015 master-0 kubenswrapper[8244]: I0318 10:03:32.910009 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "8413125cf444e5c95f023c5dd9c6151e" (UID: "8413125cf444e5c95f023c5dd9c6151e"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:03:32.910382 master-0 kubenswrapper[8244]: I0318 10:03:32.910108 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") pod \"8413125cf444e5c95f023c5dd9c6151e\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " Mar 18 10:03:32.910382 master-0 kubenswrapper[8244]: I0318 10:03:32.910226 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "8413125cf444e5c95f023c5dd9c6151e" (UID: "8413125cf444e5c95f023c5dd9c6151e"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:03:32.910510 master-0 kubenswrapper[8244]: I0318 10:03:32.910399 8244 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 10:03:32.910510 master-0 kubenswrapper[8244]: I0318 10:03:32.910416 8244 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 10:03:33.547199 master-0 kubenswrapper[8244]: I0318 10:03:33.547084 8244 generic.go:334] "Generic (PLEG): container finished" podID="1c62ceda-5e7e-4392-83b9-0d80856e1a96" containerID="64fd17a4dc869dbbdd2a4f39ac14053290f921c096dddb0c79f7bc300e3e1965" exitCode=0 Mar 18 10:03:33.550240 master-0 kubenswrapper[8244]: I0318 10:03:33.547214 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"1c62ceda-5e7e-4392-83b9-0d80856e1a96","Type":"ContainerDied","Data":"64fd17a4dc869dbbdd2a4f39ac14053290f921c096dddb0c79f7bc300e3e1965"} Mar 18 
10:03:33.553298 master-0 kubenswrapper[8244]: I0318 10:03:33.553243 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8413125cf444e5c95f023c5dd9c6151e/kube-scheduler-cert-syncer/0.log" Mar 18 10:03:33.554467 master-0 kubenswrapper[8244]: I0318 10:03:33.554394 8244 generic.go:334] "Generic (PLEG): container finished" podID="8413125cf444e5c95f023c5dd9c6151e" containerID="cd19f6008d757f0df145410d19ef8a0a4892b1a9570868a0f25d4db947985c0d" exitCode=0 Mar 18 10:03:33.554467 master-0 kubenswrapper[8244]: I0318 10:03:33.554444 8244 generic.go:334] "Generic (PLEG): container finished" podID="8413125cf444e5c95f023c5dd9c6151e" containerID="7c521115ddea902792bf48e852856b512a5618ac1e205481b00a57548b627114" exitCode=2 Mar 18 10:03:33.554467 master-0 kubenswrapper[8244]: I0318 10:03:33.554467 8244 generic.go:334] "Generic (PLEG): container finished" podID="8413125cf444e5c95f023c5dd9c6151e" containerID="98158274131e9b1c448b325fae48722d74ef93130547141c9b0a75c46c204334" exitCode=0 Mar 18 10:03:33.554862 master-0 kubenswrapper[8244]: I0318 10:03:33.554495 8244 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 10:03:33.554862 master-0 kubenswrapper[8244]: I0318 10:03:33.554606 8244 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5bf3da90da776e6c122f127625565a6fdc3ad79ed5366d030c0c0ccb65f53d0" Mar 18 10:03:33.582778 master-0 kubenswrapper[8244]: I0318 10:03:33.582290 8244 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="8413125cf444e5c95f023c5dd9c6151e" podUID="8e27b7d086edf5d2cf47b703574641d8" Mar 18 10:03:33.743894 master-0 kubenswrapper[8244]: I0318 10:03:33.742995 8244 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8413125cf444e5c95f023c5dd9c6151e" path="/var/lib/kubelet/pods/8413125cf444e5c95f023c5dd9c6151e/volumes" Mar 18 10:03:33.744189 master-0 kubenswrapper[8244]: I0318 10:03:33.743974 8244 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b82be17f9a809bd5efbd88c0026e8713" path="/var/lib/kubelet/pods/b82be17f9a809bd5efbd88c0026e8713/volumes" Mar 18 10:03:33.862345 master-0 kubenswrapper[8244]: I0318 10:03:33.862300 8244 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 10:03:34.024634 master-0 kubenswrapper[8244]: I0318 10:03:34.024567 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/346d6f79-a9bd-4097-abe7-b68830aa2e84-kube-api-access\") pod \"346d6f79-a9bd-4097-abe7-b68830aa2e84\" (UID: \"346d6f79-a9bd-4097-abe7-b68830aa2e84\") " Mar 18 10:03:34.024842 master-0 kubenswrapper[8244]: I0318 10:03:34.024680 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/346d6f79-a9bd-4097-abe7-b68830aa2e84-var-lock\") pod \"346d6f79-a9bd-4097-abe7-b68830aa2e84\" (UID: \"346d6f79-a9bd-4097-abe7-b68830aa2e84\") " Mar 18 10:03:34.024842 master-0 kubenswrapper[8244]: I0318 10:03:34.024727 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/346d6f79-a9bd-4097-abe7-b68830aa2e84-var-lock" (OuterVolumeSpecName: "var-lock") pod "346d6f79-a9bd-4097-abe7-b68830aa2e84" (UID: "346d6f79-a9bd-4097-abe7-b68830aa2e84"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:03:34.024842 master-0 kubenswrapper[8244]: I0318 10:03:34.024765 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/346d6f79-a9bd-4097-abe7-b68830aa2e84-kubelet-dir\") pod \"346d6f79-a9bd-4097-abe7-b68830aa2e84\" (UID: \"346d6f79-a9bd-4097-abe7-b68830aa2e84\") " Mar 18 10:03:34.024842 master-0 kubenswrapper[8244]: I0318 10:03:34.024799 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/346d6f79-a9bd-4097-abe7-b68830aa2e84-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "346d6f79-a9bd-4097-abe7-b68830aa2e84" (UID: "346d6f79-a9bd-4097-abe7-b68830aa2e84"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:03:34.025204 master-0 kubenswrapper[8244]: I0318 10:03:34.025165 8244 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/346d6f79-a9bd-4097-abe7-b68830aa2e84-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 10:03:34.025204 master-0 kubenswrapper[8244]: I0318 10:03:34.025197 8244 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/346d6f79-a9bd-4097-abe7-b68830aa2e84-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 10:03:34.028444 master-0 kubenswrapper[8244]: I0318 10:03:34.028400 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/346d6f79-a9bd-4097-abe7-b68830aa2e84-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "346d6f79-a9bd-4097-abe7-b68830aa2e84" (UID: "346d6f79-a9bd-4097-abe7-b68830aa2e84"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 10:03:34.126811 master-0 kubenswrapper[8244]: I0318 10:03:34.126683 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/346d6f79-a9bd-4097-abe7-b68830aa2e84-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 10:03:34.515437 master-0 kubenswrapper[8244]: E0318 10:03:34.515219 8244 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="99b6493be49322ae8ac33a1822af7b2ad1b8cf10cb82aaa72e69bd3bdfa33a16" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 10:03:34.517675 master-0 kubenswrapper[8244]: E0318 10:03:34.517572 8244 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="99b6493be49322ae8ac33a1822af7b2ad1b8cf10cb82aaa72e69bd3bdfa33a16" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 10:03:34.519690 master-0 kubenswrapper[8244]: E0318 10:03:34.519537 8244 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="99b6493be49322ae8ac33a1822af7b2ad1b8cf10cb82aaa72e69bd3bdfa33a16" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 10:03:34.519690 master-0 kubenswrapper[8244]: E0318 10:03:34.519637 8244 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-s8k7j" podUID="5e971a41-f0bc-4847-9391-6c03dd4185a6" containerName="kube-multus-additional-cni-plugins" Mar 18 10:03:34.564264 master-0 
kubenswrapper[8244]: I0318 10:03:34.564186 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 10:03:34.565058 master-0 kubenswrapper[8244]: I0318 10:03:34.564178 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"346d6f79-a9bd-4097-abe7-b68830aa2e84","Type":"ContainerDied","Data":"44fff1e61adbaef01d35b3cb7a668fee655369026524529c8495c49a8dde5128"} Mar 18 10:03:34.565058 master-0 kubenswrapper[8244]: I0318 10:03:34.564378 8244 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44fff1e61adbaef01d35b3cb7a668fee655369026524529c8495c49a8dde5128" Mar 18 10:03:34.895136 master-0 kubenswrapper[8244]: I0318 10:03:34.895054 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 18 10:03:34.952791 master-0 kubenswrapper[8244]: I0318 10:03:34.952692 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1c62ceda-5e7e-4392-83b9-0d80856e1a96-var-lock\") pod \"1c62ceda-5e7e-4392-83b9-0d80856e1a96\" (UID: \"1c62ceda-5e7e-4392-83b9-0d80856e1a96\") " Mar 18 10:03:34.952791 master-0 kubenswrapper[8244]: I0318 10:03:34.952763 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c62ceda-5e7e-4392-83b9-0d80856e1a96-kube-api-access\") pod \"1c62ceda-5e7e-4392-83b9-0d80856e1a96\" (UID: \"1c62ceda-5e7e-4392-83b9-0d80856e1a96\") " Mar 18 10:03:34.952791 master-0 kubenswrapper[8244]: I0318 10:03:34.952784 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c62ceda-5e7e-4392-83b9-0d80856e1a96-kubelet-dir\") pod \"1c62ceda-5e7e-4392-83b9-0d80856e1a96\" 
(UID: \"1c62ceda-5e7e-4392-83b9-0d80856e1a96\") " Mar 18 10:03:34.954304 master-0 kubenswrapper[8244]: I0318 10:03:34.954240 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c62ceda-5e7e-4392-83b9-0d80856e1a96-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1c62ceda-5e7e-4392-83b9-0d80856e1a96" (UID: "1c62ceda-5e7e-4392-83b9-0d80856e1a96"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:03:34.954304 master-0 kubenswrapper[8244]: I0318 10:03:34.954279 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c62ceda-5e7e-4392-83b9-0d80856e1a96-var-lock" (OuterVolumeSpecName: "var-lock") pod "1c62ceda-5e7e-4392-83b9-0d80856e1a96" (UID: "1c62ceda-5e7e-4392-83b9-0d80856e1a96"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:03:34.965879 master-0 kubenswrapper[8244]: I0318 10:03:34.965201 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c62ceda-5e7e-4392-83b9-0d80856e1a96-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1c62ceda-5e7e-4392-83b9-0d80856e1a96" (UID: "1c62ceda-5e7e-4392-83b9-0d80856e1a96"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 10:03:35.054805 master-0 kubenswrapper[8244]: I0318 10:03:35.054727 8244 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1c62ceda-5e7e-4392-83b9-0d80856e1a96-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 10:03:35.054805 master-0 kubenswrapper[8244]: I0318 10:03:35.054807 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c62ceda-5e7e-4392-83b9-0d80856e1a96-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 10:03:35.055066 master-0 kubenswrapper[8244]: I0318 10:03:35.054859 8244 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c62ceda-5e7e-4392-83b9-0d80856e1a96-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 10:03:35.573662 master-0 kubenswrapper[8244]: I0318 10:03:35.573617 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"1c62ceda-5e7e-4392-83b9-0d80856e1a96","Type":"ContainerDied","Data":"9b455e2d76fdd49301fe2af949c3adea4b9e18edfc2b50e8b9cd691e2613e68a"} Mar 18 10:03:35.574243 master-0 kubenswrapper[8244]: I0318 10:03:35.574225 8244 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b455e2d76fdd49301fe2af949c3adea4b9e18edfc2b50e8b9cd691e2613e68a" Mar 18 10:03:35.574321 master-0 kubenswrapper[8244]: I0318 10:03:35.573731 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 18 10:03:42.732871 master-0 kubenswrapper[8244]: I0318 10:03:42.732750 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:03:42.766650 master-0 kubenswrapper[8244]: I0318 10:03:42.766580 8244 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="2ce3e0e9-8812-4091-b259-5b6f2e9299b8" Mar 18 10:03:42.766650 master-0 kubenswrapper[8244]: I0318 10:03:42.766633 8244 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="2ce3e0e9-8812-4091-b259-5b6f2e9299b8" Mar 18 10:03:42.789350 master-0 kubenswrapper[8244]: I0318 10:03:42.789275 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 10:03:42.793603 master-0 kubenswrapper[8244]: I0318 10:03:42.793524 8244 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:03:42.798538 master-0 kubenswrapper[8244]: I0318 10:03:42.798476 8244 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 10:03:42.812180 master-0 kubenswrapper[8244]: I0318 10:03:42.812118 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:03:42.818575 master-0 kubenswrapper[8244]: I0318 10:03:42.818515 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 10:03:42.846027 master-0 kubenswrapper[8244]: W0318 10:03:42.845962 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf8e875368eec13e995ea08015e08c42.slice/crio-b4eb3bb67999d4fed39987c312beb2bc06f47fac3b7fcdfdc48994c77752b8ad WatchSource:0}: Error finding container b4eb3bb67999d4fed39987c312beb2bc06f47fac3b7fcdfdc48994c77752b8ad: Status 404 returned error can't find the container with id b4eb3bb67999d4fed39987c312beb2bc06f47fac3b7fcdfdc48994c77752b8ad Mar 18 10:03:43.648432 master-0 kubenswrapper[8244]: I0318 10:03:43.648389 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"af8e875368eec13e995ea08015e08c42","Type":"ContainerStarted","Data":"922d668e986d6aa98fbec9295267ac1f43fd0061254b070e0f57e9b922e66793"} Mar 18 10:03:43.648595 master-0 kubenswrapper[8244]: I0318 10:03:43.648441 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"af8e875368eec13e995ea08015e08c42","Type":"ContainerStarted","Data":"b5440fd92f867438da48c59f39988e512f02a0b7141abc1139ed7de105e95766"} Mar 18 10:03:43.648595 master-0 kubenswrapper[8244]: I0318 10:03:43.648455 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"af8e875368eec13e995ea08015e08c42","Type":"ContainerStarted","Data":"b4eb3bb67999d4fed39987c312beb2bc06f47fac3b7fcdfdc48994c77752b8ad"} Mar 18 10:03:44.051604 master-0 kubenswrapper[8244]: I0318 10:03:44.051544 8244 scope.go:117] 
"RemoveContainer" containerID="98158274131e9b1c448b325fae48722d74ef93130547141c9b0a75c46c204334" Mar 18 10:03:44.068327 master-0 kubenswrapper[8244]: I0318 10:03:44.068286 8244 scope.go:117] "RemoveContainer" containerID="459fcfb70fb899949af51fd621c6c7e3b1b5510c468c992c115b7f0303ef5eb8" Mar 18 10:03:44.087058 master-0 kubenswrapper[8244]: I0318 10:03:44.087011 8244 scope.go:117] "RemoveContainer" containerID="cd19f6008d757f0df145410d19ef8a0a4892b1a9570868a0f25d4db947985c0d" Mar 18 10:03:44.106934 master-0 kubenswrapper[8244]: I0318 10:03:44.106809 8244 scope.go:117] "RemoveContainer" containerID="7c521115ddea902792bf48e852856b512a5618ac1e205481b00a57548b627114" Mar 18 10:03:44.120307 master-0 kubenswrapper[8244]: I0318 10:03:44.120250 8244 scope.go:117] "RemoveContainer" containerID="47003cd7242b25a319c29a44ee35ea3c35fda83145ceddfb4905fe01131e1a69" Mar 18 10:03:44.514514 master-0 kubenswrapper[8244]: E0318 10:03:44.514344 8244 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="99b6493be49322ae8ac33a1822af7b2ad1b8cf10cb82aaa72e69bd3bdfa33a16" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 10:03:44.517197 master-0 kubenswrapper[8244]: E0318 10:03:44.517114 8244 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="99b6493be49322ae8ac33a1822af7b2ad1b8cf10cb82aaa72e69bd3bdfa33a16" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 10:03:44.519282 master-0 kubenswrapper[8244]: E0318 10:03:44.519212 8244 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="99b6493be49322ae8ac33a1822af7b2ad1b8cf10cb82aaa72e69bd3bdfa33a16" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 10:03:44.519392 master-0 kubenswrapper[8244]: E0318 10:03:44.519290 8244 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-s8k7j" podUID="5e971a41-f0bc-4847-9391-6c03dd4185a6" containerName="kube-multus-additional-cni-plugins" Mar 18 10:03:44.663280 master-0 kubenswrapper[8244]: I0318 10:03:44.663200 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"af8e875368eec13e995ea08015e08c42","Type":"ContainerStarted","Data":"eeb871e8e559b9fd82b985e8a38853c6cc1a0962899e9d61d0017f002e610d41"} Mar 18 10:03:44.663280 master-0 kubenswrapper[8244]: I0318 10:03:44.663269 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"af8e875368eec13e995ea08015e08c42","Type":"ContainerStarted","Data":"8a062b1b85a12fd918c3c62a85847e5a60612517f0ee750aabe64bd125668daf"} Mar 18 10:03:44.737885 master-0 kubenswrapper[8244]: I0318 10:03:44.736084 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 10:03:44.756410 master-0 kubenswrapper[8244]: I0318 10:03:44.756345 8244 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="9aa0b42e-7b58-44e3-894a-a1fa7116c7e5" Mar 18 10:03:44.756410 master-0 kubenswrapper[8244]: I0318 10:03:44.756382 8244 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="9aa0b42e-7b58-44e3-894a-a1fa7116c7e5" Mar 18 10:03:44.770944 master-0 kubenswrapper[8244]: I0318 10:03:44.770760 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.7707352370000002 podStartE2EDuration="2.770735237s" podCreationTimestamp="2026-03-18 10:03:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:03:44.700104817 +0000 UTC m=+541.179841035" watchObservedRunningTime="2026-03-18 10:03:44.770735237 +0000 UTC m=+541.250471375" Mar 18 10:03:44.771924 master-0 kubenswrapper[8244]: I0318 10:03:44.771892 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 18 10:03:44.775770 master-0 kubenswrapper[8244]: I0318 10:03:44.775720 8244 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 10:03:44.778262 master-0 kubenswrapper[8244]: I0318 10:03:44.778208 8244 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 18 10:03:44.788555 master-0 kubenswrapper[8244]: I0318 10:03:44.788492 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 10:03:44.789296 master-0 kubenswrapper[8244]: I0318 10:03:44.789258 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 18 10:03:44.807066 master-0 kubenswrapper[8244]: W0318 10:03:44.807007 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e27b7d086edf5d2cf47b703574641d8.slice/crio-175a7f574cdd0bb033854cd54eafd3c786bd342ffc7ec8cd013b6215f3ca1994 WatchSource:0}: Error finding container 175a7f574cdd0bb033854cd54eafd3c786bd342ffc7ec8cd013b6215f3ca1994: Status 404 returned error can't find the container with id 175a7f574cdd0bb033854cd54eafd3c786bd342ffc7ec8cd013b6215f3ca1994 Mar 18 10:03:45.670292 master-0 kubenswrapper[8244]: I0318 10:03:45.670216 8244 generic.go:334] "Generic (PLEG): container finished" podID="8e27b7d086edf5d2cf47b703574641d8" containerID="3e2c362efe2fe8c48b78a8150b0e9484398aa97bf0cb69d78e0777b3495062fc" exitCode=0 Mar 18 10:03:45.670292 master-0 kubenswrapper[8244]: I0318 10:03:45.670267 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerDied","Data":"3e2c362efe2fe8c48b78a8150b0e9484398aa97bf0cb69d78e0777b3495062fc"} Mar 18 10:03:45.671043 master-0 kubenswrapper[8244]: I0318 10:03:45.670327 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerStarted","Data":"175a7f574cdd0bb033854cd54eafd3c786bd342ffc7ec8cd013b6215f3ca1994"} Mar 18 10:03:46.678268 master-0 kubenswrapper[8244]: I0318 10:03:46.678094 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" 
event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerStarted","Data":"504c7c58af279fedab2f56000cc691abf8096faa6bf0c02f961583e20a138ed6"} Mar 18 10:03:46.678268 master-0 kubenswrapper[8244]: I0318 10:03:46.678135 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerStarted","Data":"e73e9ab6250891a74742cf894dfa6d6f12c07f81c7c6e29abf71445a93b042c6"} Mar 18 10:03:46.678268 master-0 kubenswrapper[8244]: I0318 10:03:46.678146 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerStarted","Data":"c508677fa84c67b31ad63db19f2ce6332119259b51c9ae7aa95d7b13079c3837"} Mar 18 10:03:46.678268 master-0 kubenswrapper[8244]: I0318 10:03:46.678218 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 10:03:46.709237 master-0 kubenswrapper[8244]: I0318 10:03:46.709138 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=2.709117646 podStartE2EDuration="2.709117646s" podCreationTimestamp="2026-03-18 10:03:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:03:46.705920375 +0000 UTC m=+543.185656503" watchObservedRunningTime="2026-03-18 10:03:46.709117646 +0000 UTC m=+543.188853774" Mar 18 10:03:47.480364 master-0 kubenswrapper[8244]: I0318 10:03:47.480284 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-s8k7j_5e971a41-f0bc-4847-9391-6c03dd4185a6/kube-multus-additional-cni-plugins/0.log" Mar 18 10:03:47.480511 master-0 kubenswrapper[8244]: I0318 10:03:47.480430 8244 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-s8k7j" Mar 18 10:03:47.655809 master-0 kubenswrapper[8244]: I0318 10:03:47.655661 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5e971a41-f0bc-4847-9391-6c03dd4185a6-tuning-conf-dir\") pod \"5e971a41-f0bc-4847-9391-6c03dd4185a6\" (UID: \"5e971a41-f0bc-4847-9391-6c03dd4185a6\") " Mar 18 10:03:47.655809 master-0 kubenswrapper[8244]: I0318 10:03:47.655749 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5e971a41-f0bc-4847-9391-6c03dd4185a6-cni-sysctl-allowlist\") pod \"5e971a41-f0bc-4847-9391-6c03dd4185a6\" (UID: \"5e971a41-f0bc-4847-9391-6c03dd4185a6\") " Mar 18 10:03:47.656095 master-0 kubenswrapper[8244]: I0318 10:03:47.655891 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6lsx\" (UniqueName: \"kubernetes.io/projected/5e971a41-f0bc-4847-9391-6c03dd4185a6-kube-api-access-w6lsx\") pod \"5e971a41-f0bc-4847-9391-6c03dd4185a6\" (UID: \"5e971a41-f0bc-4847-9391-6c03dd4185a6\") " Mar 18 10:03:47.656095 master-0 kubenswrapper[8244]: I0318 10:03:47.655944 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/5e971a41-f0bc-4847-9391-6c03dd4185a6-ready\") pod \"5e971a41-f0bc-4847-9391-6c03dd4185a6\" (UID: \"5e971a41-f0bc-4847-9391-6c03dd4185a6\") " Mar 18 10:03:47.656507 master-0 kubenswrapper[8244]: I0318 10:03:47.656459 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e971a41-f0bc-4847-9391-6c03dd4185a6-ready" (OuterVolumeSpecName: "ready") pod "5e971a41-f0bc-4847-9391-6c03dd4185a6" (UID: "5e971a41-f0bc-4847-9391-6c03dd4185a6"). InnerVolumeSpecName "ready". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 10:03:47.656624 master-0 kubenswrapper[8244]: I0318 10:03:47.656555 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e971a41-f0bc-4847-9391-6c03dd4185a6-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "5e971a41-f0bc-4847-9391-6c03dd4185a6" (UID: "5e971a41-f0bc-4847-9391-6c03dd4185a6"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:03:47.656684 master-0 kubenswrapper[8244]: I0318 10:03:47.656649 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e971a41-f0bc-4847-9391-6c03dd4185a6-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "5e971a41-f0bc-4847-9391-6c03dd4185a6" (UID: "5e971a41-f0bc-4847-9391-6c03dd4185a6"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:03:47.661689 master-0 kubenswrapper[8244]: I0318 10:03:47.661621 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e971a41-f0bc-4847-9391-6c03dd4185a6-kube-api-access-w6lsx" (OuterVolumeSpecName: "kube-api-access-w6lsx") pod "5e971a41-f0bc-4847-9391-6c03dd4185a6" (UID: "5e971a41-f0bc-4847-9391-6c03dd4185a6"). InnerVolumeSpecName "kube-api-access-w6lsx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 10:03:47.684435 master-0 kubenswrapper[8244]: I0318 10:03:47.684391 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-s8k7j_5e971a41-f0bc-4847-9391-6c03dd4185a6/kube-multus-additional-cni-plugins/0.log" Mar 18 10:03:47.684915 master-0 kubenswrapper[8244]: I0318 10:03:47.684439 8244 generic.go:334] "Generic (PLEG): container finished" podID="5e971a41-f0bc-4847-9391-6c03dd4185a6" containerID="99b6493be49322ae8ac33a1822af7b2ad1b8cf10cb82aaa72e69bd3bdfa33a16" exitCode=137 Mar 18 10:03:47.684915 master-0 kubenswrapper[8244]: I0318 10:03:47.684488 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-s8k7j" Mar 18 10:03:47.684915 master-0 kubenswrapper[8244]: I0318 10:03:47.684540 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-s8k7j" event={"ID":"5e971a41-f0bc-4847-9391-6c03dd4185a6","Type":"ContainerDied","Data":"99b6493be49322ae8ac33a1822af7b2ad1b8cf10cb82aaa72e69bd3bdfa33a16"} Mar 18 10:03:47.684915 master-0 kubenswrapper[8244]: I0318 10:03:47.684564 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-s8k7j" event={"ID":"5e971a41-f0bc-4847-9391-6c03dd4185a6","Type":"ContainerDied","Data":"05391b559584b61eed691de160fd743945d67b3f396cbfb6ffe9983f7f3835e8"} Mar 18 10:03:47.684915 master-0 kubenswrapper[8244]: I0318 10:03:47.684580 8244 scope.go:117] "RemoveContainer" containerID="99b6493be49322ae8ac33a1822af7b2ad1b8cf10cb82aaa72e69bd3bdfa33a16" Mar 18 10:03:47.704377 master-0 kubenswrapper[8244]: I0318 10:03:47.704313 8244 scope.go:117] "RemoveContainer" containerID="99b6493be49322ae8ac33a1822af7b2ad1b8cf10cb82aaa72e69bd3bdfa33a16" Mar 18 10:03:47.705206 master-0 kubenswrapper[8244]: E0318 10:03:47.705125 8244 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"99b6493be49322ae8ac33a1822af7b2ad1b8cf10cb82aaa72e69bd3bdfa33a16\": container with ID starting with 99b6493be49322ae8ac33a1822af7b2ad1b8cf10cb82aaa72e69bd3bdfa33a16 not found: ID does not exist" containerID="99b6493be49322ae8ac33a1822af7b2ad1b8cf10cb82aaa72e69bd3bdfa33a16" Mar 18 10:03:47.705297 master-0 kubenswrapper[8244]: I0318 10:03:47.705223 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99b6493be49322ae8ac33a1822af7b2ad1b8cf10cb82aaa72e69bd3bdfa33a16"} err="failed to get container status \"99b6493be49322ae8ac33a1822af7b2ad1b8cf10cb82aaa72e69bd3bdfa33a16\": rpc error: code = NotFound desc = could not find container \"99b6493be49322ae8ac33a1822af7b2ad1b8cf10cb82aaa72e69bd3bdfa33a16\": container with ID starting with 99b6493be49322ae8ac33a1822af7b2ad1b8cf10cb82aaa72e69bd3bdfa33a16 not found: ID does not exist" Mar 18 10:03:47.727847 master-0 kubenswrapper[8244]: I0318 10:03:47.727761 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-s8k7j"] Mar 18 10:03:47.732057 master-0 kubenswrapper[8244]: I0318 10:03:47.732002 8244 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-s8k7j"] Mar 18 10:03:47.742064 master-0 kubenswrapper[8244]: I0318 10:03:47.742013 8244 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e971a41-f0bc-4847-9391-6c03dd4185a6" path="/var/lib/kubelet/pods/5e971a41-f0bc-4847-9391-6c03dd4185a6/volumes" Mar 18 10:03:47.757593 master-0 kubenswrapper[8244]: I0318 10:03:47.757520 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6lsx\" (UniqueName: \"kubernetes.io/projected/5e971a41-f0bc-4847-9391-6c03dd4185a6-kube-api-access-w6lsx\") on node \"master-0\" DevicePath \"\"" Mar 18 10:03:47.757593 master-0 kubenswrapper[8244]: I0318 10:03:47.757573 8244 reconciler_common.go:293] "Volume detached for volume 
\"ready\" (UniqueName: \"kubernetes.io/empty-dir/5e971a41-f0bc-4847-9391-6c03dd4185a6-ready\") on node \"master-0\" DevicePath \"\"" Mar 18 10:03:47.757593 master-0 kubenswrapper[8244]: I0318 10:03:47.757594 8244 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5e971a41-f0bc-4847-9391-6c03dd4185a6-tuning-conf-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 10:03:47.757931 master-0 kubenswrapper[8244]: I0318 10:03:47.757616 8244 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5e971a41-f0bc-4847-9391-6c03dd4185a6-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\"" Mar 18 10:03:52.812788 master-0 kubenswrapper[8244]: I0318 10:03:52.812688 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:03:52.812788 master-0 kubenswrapper[8244]: I0318 10:03:52.812769 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:03:52.812788 master-0 kubenswrapper[8244]: I0318 10:03:52.812796 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:03:52.813791 master-0 kubenswrapper[8244]: I0318 10:03:52.812815 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:03:52.813791 master-0 kubenswrapper[8244]: I0318 10:03:52.813030 8244 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 18 
10:03:52.813791 master-0 kubenswrapper[8244]: I0318 10:03:52.813100 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="af8e875368eec13e995ea08015e08c42" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 18 10:03:52.820733 master-0 kubenswrapper[8244]: I0318 10:03:52.820653 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:03:53.747814 master-0 kubenswrapper[8244]: I0318 10:03:53.747732 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:03:54.242680 master-0 kubenswrapper[8244]: I0318 10:03:54.242612 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 18 10:03:54.243428 master-0 kubenswrapper[8244]: E0318 10:03:54.243019 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e971a41-f0bc-4847-9391-6c03dd4185a6" containerName="kube-multus-additional-cni-plugins" Mar 18 10:03:54.243428 master-0 kubenswrapper[8244]: I0318 10:03:54.243041 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e971a41-f0bc-4847-9391-6c03dd4185a6" containerName="kube-multus-additional-cni-plugins" Mar 18 10:03:54.243428 master-0 kubenswrapper[8244]: E0318 10:03:54.243081 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="346d6f79-a9bd-4097-abe7-b68830aa2e84" containerName="installer" Mar 18 10:03:54.243428 master-0 kubenswrapper[8244]: I0318 10:03:54.243095 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="346d6f79-a9bd-4097-abe7-b68830aa2e84" containerName="installer" Mar 18 10:03:54.243428 master-0 kubenswrapper[8244]: E0318 10:03:54.243127 8244 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c62ceda-5e7e-4392-83b9-0d80856e1a96" containerName="installer" Mar 18 10:03:54.243428 master-0 kubenswrapper[8244]: I0318 10:03:54.243148 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c62ceda-5e7e-4392-83b9-0d80856e1a96" containerName="installer" Mar 18 10:03:54.243428 master-0 kubenswrapper[8244]: I0318 10:03:54.243357 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="346d6f79-a9bd-4097-abe7-b68830aa2e84" containerName="installer" Mar 18 10:03:54.243428 master-0 kubenswrapper[8244]: I0318 10:03:54.243393 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c62ceda-5e7e-4392-83b9-0d80856e1a96" containerName="installer" Mar 18 10:03:54.243428 master-0 kubenswrapper[8244]: I0318 10:03:54.243416 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e971a41-f0bc-4847-9391-6c03dd4185a6" containerName="kube-multus-additional-cni-plugins" Mar 18 10:03:54.244309 master-0 kubenswrapper[8244]: I0318 10:03:54.244267 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 10:03:54.247421 master-0 kubenswrapper[8244]: I0318 10:03:54.247354 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 18 10:03:54.251483 master-0 kubenswrapper[8244]: I0318 10:03:54.251415 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-244m4" Mar 18 10:03:54.260920 master-0 kubenswrapper[8244]: I0318 10:03:54.260780 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 18 10:03:54.268115 master-0 kubenswrapper[8244]: I0318 10:03:54.267805 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a13b76d1-aad1-4ca8-8991-2041a4b10c15-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"a13b76d1-aad1-4ca8-8991-2041a4b10c15\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 10:03:54.268115 master-0 kubenswrapper[8244]: I0318 10:03:54.267909 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a13b76d1-aad1-4ca8-8991-2041a4b10c15-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"a13b76d1-aad1-4ca8-8991-2041a4b10c15\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 10:03:54.268115 master-0 kubenswrapper[8244]: I0318 10:03:54.267982 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a13b76d1-aad1-4ca8-8991-2041a4b10c15-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"a13b76d1-aad1-4ca8-8991-2041a4b10c15\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 10:03:54.368892 master-0 
kubenswrapper[8244]: I0318 10:03:54.368801 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a13b76d1-aad1-4ca8-8991-2041a4b10c15-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"a13b76d1-aad1-4ca8-8991-2041a4b10c15\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 10:03:54.369071 master-0 kubenswrapper[8244]: I0318 10:03:54.368906 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a13b76d1-aad1-4ca8-8991-2041a4b10c15-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"a13b76d1-aad1-4ca8-8991-2041a4b10c15\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 10:03:54.369071 master-0 kubenswrapper[8244]: I0318 10:03:54.368975 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a13b76d1-aad1-4ca8-8991-2041a4b10c15-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"a13b76d1-aad1-4ca8-8991-2041a4b10c15\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 10:03:54.369071 master-0 kubenswrapper[8244]: I0318 10:03:54.369017 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a13b76d1-aad1-4ca8-8991-2041a4b10c15-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"a13b76d1-aad1-4ca8-8991-2041a4b10c15\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 10:03:54.369071 master-0 kubenswrapper[8244]: I0318 10:03:54.369052 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a13b76d1-aad1-4ca8-8991-2041a4b10c15-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"a13b76d1-aad1-4ca8-8991-2041a4b10c15\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" 
Mar 18 10:03:54.398258 master-0 kubenswrapper[8244]: I0318 10:03:54.398173 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a13b76d1-aad1-4ca8-8991-2041a4b10c15-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"a13b76d1-aad1-4ca8-8991-2041a4b10c15\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 10:03:54.580566 master-0 kubenswrapper[8244]: I0318 10:03:54.580430 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 10:03:55.057425 master-0 kubenswrapper[8244]: I0318 10:03:55.057359 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 18 10:03:55.755024 master-0 kubenswrapper[8244]: I0318 10:03:55.754885 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"a13b76d1-aad1-4ca8-8991-2041a4b10c15","Type":"ContainerStarted","Data":"5207aff9b2a59447b37e83be01595dae8c33a9575960fb1ab22f24275dd785cb"} Mar 18 10:03:55.755024 master-0 kubenswrapper[8244]: I0318 10:03:55.754973 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"a13b76d1-aad1-4ca8-8991-2041a4b10c15","Type":"ContainerStarted","Data":"c1634d2a2ea9df3e563d807666a5bf99578cccd363e274a20292f790cffc4c74"} Mar 18 10:03:56.770541 master-0 kubenswrapper[8244]: I0318 10:03:56.770370 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5dbbb8b86f-hkzr2_ca4a0040-a638-46fa-a1cb-a19d83a7ebe4/multus-admission-controller/0.log" Mar 18 10:03:56.770541 master-0 kubenswrapper[8244]: I0318 10:03:56.770451 8244 generic.go:334] "Generic (PLEG): container finished" podID="ca4a0040-a638-46fa-a1cb-a19d83a7ebe4" 
containerID="d329cbff3f93c0797d55bbc4989994ef6bde775d852d69c46ec0c0eadff97f83" exitCode=137 Mar 18 10:03:56.771445 master-0 kubenswrapper[8244]: I0318 10:03:56.770516 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2" event={"ID":"ca4a0040-a638-46fa-a1cb-a19d83a7ebe4","Type":"ContainerDied","Data":"d329cbff3f93c0797d55bbc4989994ef6bde775d852d69c46ec0c0eadff97f83"} Mar 18 10:03:57.366242 master-0 kubenswrapper[8244]: I0318 10:03:57.366191 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5dbbb8b86f-hkzr2_ca4a0040-a638-46fa-a1cb-a19d83a7ebe4/multus-admission-controller/0.log" Mar 18 10:03:57.366393 master-0 kubenswrapper[8244]: I0318 10:03:57.366262 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2" Mar 18 10:03:57.385907 master-0 kubenswrapper[8244]: I0318 10:03:57.385748 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podStartSLOduration=3.385727146 podStartE2EDuration="3.385727146s" podCreationTimestamp="2026-03-18 10:03:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:03:55.786389362 +0000 UTC m=+552.266125500" watchObservedRunningTime="2026-03-18 10:03:57.385727146 +0000 UTC m=+553.865463274" Mar 18 10:03:57.512295 master-0 kubenswrapper[8244]: I0318 10:03:57.512230 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkzq9\" (UniqueName: \"kubernetes.io/projected/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-kube-api-access-dkzq9\") pod \"ca4a0040-a638-46fa-a1cb-a19d83a7ebe4\" (UID: \"ca4a0040-a638-46fa-a1cb-a19d83a7ebe4\") " Mar 18 10:03:57.512485 master-0 kubenswrapper[8244]: I0318 10:03:57.512362 8244 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs\") pod \"ca4a0040-a638-46fa-a1cb-a19d83a7ebe4\" (UID: \"ca4a0040-a638-46fa-a1cb-a19d83a7ebe4\") " Mar 18 10:03:57.515169 master-0 kubenswrapper[8244]: I0318 10:03:57.515130 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "ca4a0040-a638-46fa-a1cb-a19d83a7ebe4" (UID: "ca4a0040-a638-46fa-a1cb-a19d83a7ebe4"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 10:03:57.515528 master-0 kubenswrapper[8244]: I0318 10:03:57.515493 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-kube-api-access-dkzq9" (OuterVolumeSpecName: "kube-api-access-dkzq9") pod "ca4a0040-a638-46fa-a1cb-a19d83a7ebe4" (UID: "ca4a0040-a638-46fa-a1cb-a19d83a7ebe4"). InnerVolumeSpecName "kube-api-access-dkzq9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 10:03:57.614648 master-0 kubenswrapper[8244]: I0318 10:03:57.614595 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkzq9\" (UniqueName: \"kubernetes.io/projected/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-kube-api-access-dkzq9\") on node \"master-0\" DevicePath \"\"" Mar 18 10:03:57.614648 master-0 kubenswrapper[8244]: I0318 10:03:57.614635 8244 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4-webhook-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 10:03:57.784424 master-0 kubenswrapper[8244]: I0318 10:03:57.784320 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5dbbb8b86f-hkzr2_ca4a0040-a638-46fa-a1cb-a19d83a7ebe4/multus-admission-controller/0.log" Mar 18 10:03:57.784424 master-0 kubenswrapper[8244]: I0318 10:03:57.784408 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2" event={"ID":"ca4a0040-a638-46fa-a1cb-a19d83a7ebe4","Type":"ContainerDied","Data":"09d710db13d778dbf9177c53bdd0bf416b054e571b3f82d139455ca7c45869a9"} Mar 18 10:03:57.785099 master-0 kubenswrapper[8244]: I0318 10:03:57.784460 8244 scope.go:117] "RemoveContainer" containerID="37124343fb8209ca549ff671c560cfcd2f841cdc0b622af9f05faea1d0440b44" Mar 18 10:03:57.785099 master-0 kubenswrapper[8244]: I0318 10:03:57.784654 8244 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2" Mar 18 10:03:57.849846 master-0 kubenswrapper[8244]: I0318 10:03:57.847003 8244 scope.go:117] "RemoveContainer" containerID="d329cbff3f93c0797d55bbc4989994ef6bde775d852d69c46ec0c0eadff97f83" Mar 18 10:03:57.868596 master-0 kubenswrapper[8244]: I0318 10:03:57.863004 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2"] Mar 18 10:03:57.874460 master-0 kubenswrapper[8244]: I0318 10:03:57.874406 8244 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-hkzr2"] Mar 18 10:03:58.796692 master-0 kubenswrapper[8244]: I0318 10:03:58.796629 8244 generic.go:334] "Generic (PLEG): container finished" podID="43d54514-989c-4c82-93f9-153b44eacdd1" containerID="0056d6e24bcc6dc57e3453a9e7f141adeb078909a14a7b6029f52e100df60161" exitCode=0 Mar 18 10:03:58.796692 master-0 kubenswrapper[8244]: I0318 10:03:58.796674 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" event={"ID":"43d54514-989c-4c82-93f9-153b44eacdd1","Type":"ContainerDied","Data":"0056d6e24bcc6dc57e3453a9e7f141adeb078909a14a7b6029f52e100df60161"} Mar 18 10:03:58.797352 master-0 kubenswrapper[8244]: I0318 10:03:58.796731 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" event={"ID":"43d54514-989c-4c82-93f9-153b44eacdd1","Type":"ContainerStarted","Data":"49d021e4bb5a3483651e863b5f33517771b81ab9615ea08cc7bd4cae097b1d2d"} Mar 18 10:03:58.797352 master-0 kubenswrapper[8244]: I0318 10:03:58.796763 8244 scope.go:117] "RemoveContainer" containerID="83d2d113ec64b26f85c2da77fcf83ffd1c0559babf05a97c582bf5bda8d8a7a5" Mar 18 10:03:59.399632 master-0 kubenswrapper[8244]: I0318 10:03:59.399521 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-ingress/router-default-7dcf5569b5-82tbk" Mar 18 10:03:59.403929 master-0 kubenswrapper[8244]: I0318 10:03:59.403818 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:03:59.403929 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:03:59.403929 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:03:59.403929 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:03:59.404381 master-0 kubenswrapper[8244]: I0318 10:03:59.403966 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:03:59.743861 master-0 kubenswrapper[8244]: I0318 10:03:59.743685 8244 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca4a0040-a638-46fa-a1cb-a19d83a7ebe4" path="/var/lib/kubelet/pods/ca4a0040-a638-46fa-a1cb-a19d83a7ebe4/volumes" Mar 18 10:04:00.401105 master-0 kubenswrapper[8244]: I0318 10:04:00.401042 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:00.401105 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:00.401105 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:00.401105 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:00.402274 master-0 kubenswrapper[8244]: I0318 10:04:00.401119 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" 
podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:00.639941 master-0 kubenswrapper[8244]: I0318 10:04:00.639867 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 18 10:04:00.640190 master-0 kubenswrapper[8244]: I0318 10:04:00.640144 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podUID="a13b76d1-aad1-4ca8-8991-2041a4b10c15" containerName="installer" containerID="cri-o://5207aff9b2a59447b37e83be01595dae8c33a9575960fb1ab22f24275dd785cb" gracePeriod=30 Mar 18 10:04:00.817576 master-0 kubenswrapper[8244]: I0318 10:04:00.817508 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-kr5kz_accc57fb-75f5-4f89-9804-6ede7f77e27c/ingress-operator/3.log" Mar 18 10:04:00.818479 master-0 kubenswrapper[8244]: I0318 10:04:00.818409 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-kr5kz_accc57fb-75f5-4f89-9804-6ede7f77e27c/ingress-operator/2.log" Mar 18 10:04:00.818989 master-0 kubenswrapper[8244]: I0318 10:04:00.818945 8244 generic.go:334] "Generic (PLEG): container finished" podID="accc57fb-75f5-4f89-9804-6ede7f77e27c" containerID="19028a9b74d8fde675db8214cb7dc59516cd57bb8937a1e369ea219dd5ad277c" exitCode=1 Mar 18 10:04:00.819147 master-0 kubenswrapper[8244]: I0318 10:04:00.819083 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" event={"ID":"accc57fb-75f5-4f89-9804-6ede7f77e27c","Type":"ContainerDied","Data":"19028a9b74d8fde675db8214cb7dc59516cd57bb8937a1e369ea219dd5ad277c"} Mar 18 10:04:00.819223 master-0 kubenswrapper[8244]: I0318 10:04:00.819191 8244 scope.go:117] "RemoveContainer" 
containerID="0d30b4f631b8eb9dde0a0925230da53e5145662b1505b3eb3b7912145bc9b9d7" Mar 18 10:04:00.819961 master-0 kubenswrapper[8244]: I0318 10:04:00.819930 8244 scope.go:117] "RemoveContainer" containerID="19028a9b74d8fde675db8214cb7dc59516cd57bb8937a1e369ea219dd5ad277c" Mar 18 10:04:00.820322 master-0 kubenswrapper[8244]: E0318 10:04:00.820283 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-kr5kz_openshift-ingress-operator(accc57fb-75f5-4f89-9804-6ede7f77e27c)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" podUID="accc57fb-75f5-4f89-9804-6ede7f77e27c" Mar 18 10:04:01.401581 master-0 kubenswrapper[8244]: I0318 10:04:01.401517 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:01.401581 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:01.401581 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:01.401581 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:01.401581 master-0 kubenswrapper[8244]: I0318 10:04:01.401579 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:01.829473 master-0 kubenswrapper[8244]: I0318 10:04:01.829360 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-kr5kz_accc57fb-75f5-4f89-9804-6ede7f77e27c/ingress-operator/3.log" Mar 18 10:04:02.402864 master-0 kubenswrapper[8244]: I0318 10:04:02.402580 8244 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:02.402864 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:02.402864 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:02.402864 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:02.403529 master-0 kubenswrapper[8244]: I0318 10:04:02.402964 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:02.819738 master-0 kubenswrapper[8244]: I0318 10:04:02.819680 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:04:02.826746 master-0 kubenswrapper[8244]: I0318 10:04:02.826703 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:04:03.399524 master-0 kubenswrapper[8244]: I0318 10:04:03.399430 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" Mar 18 10:04:03.401448 master-0 kubenswrapper[8244]: I0318 10:04:03.401369 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:03.401448 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:03.401448 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:03.401448 master-0 kubenswrapper[8244]: 
healthz check failed Mar 18 10:04:03.401448 master-0 kubenswrapper[8244]: I0318 10:04:03.401432 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:04.401890 master-0 kubenswrapper[8244]: I0318 10:04:04.401766 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:04.401890 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:04.401890 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:04.401890 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:04.401890 master-0 kubenswrapper[8244]: I0318 10:04:04.401883 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:05.239201 master-0 kubenswrapper[8244]: I0318 10:04:05.239112 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 18 10:04:05.239553 master-0 kubenswrapper[8244]: E0318 10:04:05.239527 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca4a0040-a638-46fa-a1cb-a19d83a7ebe4" containerName="kube-rbac-proxy" Mar 18 10:04:05.239631 master-0 kubenswrapper[8244]: I0318 10:04:05.239555 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca4a0040-a638-46fa-a1cb-a19d83a7ebe4" containerName="kube-rbac-proxy" Mar 18 10:04:05.239631 master-0 kubenswrapper[8244]: E0318 10:04:05.239593 8244 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ca4a0040-a638-46fa-a1cb-a19d83a7ebe4" containerName="multus-admission-controller" Mar 18 10:04:05.239631 master-0 kubenswrapper[8244]: I0318 10:04:05.239606 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca4a0040-a638-46fa-a1cb-a19d83a7ebe4" containerName="multus-admission-controller" Mar 18 10:04:05.239821 master-0 kubenswrapper[8244]: I0318 10:04:05.239803 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca4a0040-a638-46fa-a1cb-a19d83a7ebe4" containerName="multus-admission-controller" Mar 18 10:04:05.239935 master-0 kubenswrapper[8244]: I0318 10:04:05.239886 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca4a0040-a638-46fa-a1cb-a19d83a7ebe4" containerName="kube-rbac-proxy" Mar 18 10:04:05.240568 master-0 kubenswrapper[8244]: I0318 10:04:05.240524 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 10:04:05.264467 master-0 kubenswrapper[8244]: I0318 10:04:05.264390 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 18 10:04:05.401325 master-0 kubenswrapper[8244]: I0318 10:04:05.401263 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:05.401325 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:05.401325 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:05.401325 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:05.401693 master-0 kubenswrapper[8244]: I0318 10:04:05.401347 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Mar 18 10:04:05.433912 master-0 kubenswrapper[8244]: I0318 10:04:05.433851 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f79f2e47-0828-4a77-b0e8-0e142aa55563-var-lock\") pod \"installer-2-master-0\" (UID: \"f79f2e47-0828-4a77-b0e8-0e142aa55563\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 10:04:05.434478 master-0 kubenswrapper[8244]: I0318 10:04:05.433936 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f79f2e47-0828-4a77-b0e8-0e142aa55563-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"f79f2e47-0828-4a77-b0e8-0e142aa55563\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 10:04:05.434478 master-0 kubenswrapper[8244]: I0318 10:04:05.434172 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f79f2e47-0828-4a77-b0e8-0e142aa55563-kube-api-access\") pod \"installer-2-master-0\" (UID: \"f79f2e47-0828-4a77-b0e8-0e142aa55563\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 10:04:05.536043 master-0 kubenswrapper[8244]: I0318 10:04:05.535929 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f79f2e47-0828-4a77-b0e8-0e142aa55563-kube-api-access\") pod \"installer-2-master-0\" (UID: \"f79f2e47-0828-4a77-b0e8-0e142aa55563\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 10:04:05.536531 master-0 kubenswrapper[8244]: I0318 10:04:05.536085 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f79f2e47-0828-4a77-b0e8-0e142aa55563-var-lock\") pod \"installer-2-master-0\" (UID: 
\"f79f2e47-0828-4a77-b0e8-0e142aa55563\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 10:04:05.536531 master-0 kubenswrapper[8244]: I0318 10:04:05.536184 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f79f2e47-0828-4a77-b0e8-0e142aa55563-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"f79f2e47-0828-4a77-b0e8-0e142aa55563\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 10:04:05.536531 master-0 kubenswrapper[8244]: I0318 10:04:05.536316 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f79f2e47-0828-4a77-b0e8-0e142aa55563-var-lock\") pod \"installer-2-master-0\" (UID: \"f79f2e47-0828-4a77-b0e8-0e142aa55563\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 10:04:05.536531 master-0 kubenswrapper[8244]: I0318 10:04:05.536353 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f79f2e47-0828-4a77-b0e8-0e142aa55563-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"f79f2e47-0828-4a77-b0e8-0e142aa55563\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 10:04:05.564738 master-0 kubenswrapper[8244]: I0318 10:04:05.564676 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f79f2e47-0828-4a77-b0e8-0e142aa55563-kube-api-access\") pod \"installer-2-master-0\" (UID: \"f79f2e47-0828-4a77-b0e8-0e142aa55563\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 10:04:05.610423 master-0 kubenswrapper[8244]: I0318 10:04:05.610364 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 10:04:06.037581 master-0 kubenswrapper[8244]: I0318 10:04:06.035219 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 18 10:04:06.400571 master-0 kubenswrapper[8244]: I0318 10:04:06.400503 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:06.400571 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:06.400571 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:06.400571 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:06.400908 master-0 kubenswrapper[8244]: I0318 10:04:06.400588 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:06.861973 master-0 kubenswrapper[8244]: I0318 10:04:06.861892 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"f79f2e47-0828-4a77-b0e8-0e142aa55563","Type":"ContainerStarted","Data":"b217d141befd6f7b3d6db2018cfa659be0781d7e67bfded5194aa13f197e35f6"} Mar 18 10:04:06.862496 master-0 kubenswrapper[8244]: I0318 10:04:06.861984 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"f79f2e47-0828-4a77-b0e8-0e142aa55563","Type":"ContainerStarted","Data":"9cc91b7c35b06302ffa66829b51e40fc464b7f08e391da5d0d245e6ef074aa97"} Mar 18 10:04:06.889169 master-0 kubenswrapper[8244]: I0318 10:04:06.889094 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/installer-2-master-0" podStartSLOduration=1.8890754360000002 podStartE2EDuration="1.889075436s" podCreationTimestamp="2026-03-18 10:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:04:06.883277016 +0000 UTC m=+563.363013154" watchObservedRunningTime="2026-03-18 10:04:06.889075436 +0000 UTC m=+563.368811564" Mar 18 10:04:07.402138 master-0 kubenswrapper[8244]: I0318 10:04:07.402057 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:07.402138 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:07.402138 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:07.402138 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:07.402485 master-0 kubenswrapper[8244]: I0318 10:04:07.402143 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:08.401625 master-0 kubenswrapper[8244]: I0318 10:04:08.401541 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:08.401625 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:08.401625 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:08.401625 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:08.401625 master-0 kubenswrapper[8244]: I0318 10:04:08.401612 8244 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:09.403086 master-0 kubenswrapper[8244]: I0318 10:04:09.403000 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:09.403086 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:09.403086 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:09.403086 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:09.404213 master-0 kubenswrapper[8244]: I0318 10:04:09.403103 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:10.402175 master-0 kubenswrapper[8244]: I0318 10:04:10.402084 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:10.402175 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:10.402175 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:10.402175 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:10.402779 master-0 kubenswrapper[8244]: I0318 10:04:10.402205 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Mar 18 10:04:11.401405 master-0 kubenswrapper[8244]: I0318 10:04:11.401319 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:11.401405 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:11.401405 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:11.401405 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:11.402129 master-0 kubenswrapper[8244]: I0318 10:04:11.401416 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:12.402454 master-0 kubenswrapper[8244]: I0318 10:04:12.402378 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:12.402454 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:12.402454 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:12.402454 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:12.403143 master-0 kubenswrapper[8244]: I0318 10:04:12.402469 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:13.401688 master-0 kubenswrapper[8244]: I0318 10:04:13.401617 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:13.401688 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:13.401688 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:13.401688 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:13.402382 master-0 kubenswrapper[8244]: I0318 10:04:13.402334 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:14.401134 master-0 kubenswrapper[8244]: I0318 10:04:14.401071 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:14.401134 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:14.401134 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:14.401134 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:14.402054 master-0 kubenswrapper[8244]: I0318 10:04:14.401145 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:15.400980 master-0 kubenswrapper[8244]: I0318 10:04:15.400918 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:15.400980 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 
18 10:04:15.400980 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:15.400980 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:15.401724 master-0 kubenswrapper[8244]: I0318 10:04:15.400991 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:15.733592 master-0 kubenswrapper[8244]: I0318 10:04:15.733382 8244 scope.go:117] "RemoveContainer" containerID="19028a9b74d8fde675db8214cb7dc59516cd57bb8937a1e369ea219dd5ad277c" Mar 18 10:04:15.733909 master-0 kubenswrapper[8244]: E0318 10:04:15.733634 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-kr5kz_openshift-ingress-operator(accc57fb-75f5-4f89-9804-6ede7f77e27c)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" podUID="accc57fb-75f5-4f89-9804-6ede7f77e27c" Mar 18 10:04:16.195362 master-0 kubenswrapper[8244]: I0318 10:04:16.194909 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 18 10:04:16.196650 master-0 kubenswrapper[8244]: I0318 10:04:16.196618 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 18 10:04:16.198784 master-0 kubenswrapper[8244]: I0318 10:04:16.198737 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Mar 18 10:04:16.200709 master-0 kubenswrapper[8244]: I0318 10:04:16.200669 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-zdg6q" Mar 18 10:04:16.207523 master-0 kubenswrapper[8244]: I0318 10:04:16.207474 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 18 10:04:16.323436 master-0 kubenswrapper[8244]: I0318 10:04:16.323143 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87a8662e-66f1-4aee-9344-564bb4ac4f9a-kube-api-access\") pod \"installer-2-master-0\" (UID: \"87a8662e-66f1-4aee-9344-564bb4ac4f9a\") " pod="openshift-etcd/installer-2-master-0" Mar 18 10:04:16.323436 master-0 kubenswrapper[8244]: I0318 10:04:16.323357 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/87a8662e-66f1-4aee-9344-564bb4ac4f9a-var-lock\") pod \"installer-2-master-0\" (UID: \"87a8662e-66f1-4aee-9344-564bb4ac4f9a\") " pod="openshift-etcd/installer-2-master-0" Mar 18 10:04:16.323657 master-0 kubenswrapper[8244]: I0318 10:04:16.323541 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/87a8662e-66f1-4aee-9344-564bb4ac4f9a-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"87a8662e-66f1-4aee-9344-564bb4ac4f9a\") " pod="openshift-etcd/installer-2-master-0" Mar 18 10:04:16.401096 master-0 kubenswrapper[8244]: I0318 10:04:16.401001 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:16.401096 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:16.401096 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:16.401096 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:16.401096 master-0 kubenswrapper[8244]: I0318 10:04:16.401062 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:16.424768 master-0 kubenswrapper[8244]: I0318 10:04:16.424703 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/87a8662e-66f1-4aee-9344-564bb4ac4f9a-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"87a8662e-66f1-4aee-9344-564bb4ac4f9a\") " pod="openshift-etcd/installer-2-master-0" Mar 18 10:04:16.424981 master-0 kubenswrapper[8244]: I0318 10:04:16.424807 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/87a8662e-66f1-4aee-9344-564bb4ac4f9a-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"87a8662e-66f1-4aee-9344-564bb4ac4f9a\") " pod="openshift-etcd/installer-2-master-0" Mar 18 10:04:16.424981 master-0 kubenswrapper[8244]: I0318 10:04:16.424882 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87a8662e-66f1-4aee-9344-564bb4ac4f9a-kube-api-access\") pod \"installer-2-master-0\" (UID: \"87a8662e-66f1-4aee-9344-564bb4ac4f9a\") " pod="openshift-etcd/installer-2-master-0" Mar 18 10:04:16.424981 master-0 kubenswrapper[8244]: I0318 10:04:16.424975 8244 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/87a8662e-66f1-4aee-9344-564bb4ac4f9a-var-lock\") pod \"installer-2-master-0\" (UID: \"87a8662e-66f1-4aee-9344-564bb4ac4f9a\") " pod="openshift-etcd/installer-2-master-0" Mar 18 10:04:16.425076 master-0 kubenswrapper[8244]: I0318 10:04:16.425015 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/87a8662e-66f1-4aee-9344-564bb4ac4f9a-var-lock\") pod \"installer-2-master-0\" (UID: \"87a8662e-66f1-4aee-9344-564bb4ac4f9a\") " pod="openshift-etcd/installer-2-master-0" Mar 18 10:04:16.441284 master-0 kubenswrapper[8244]: I0318 10:04:16.441242 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87a8662e-66f1-4aee-9344-564bb4ac4f9a-kube-api-access\") pod \"installer-2-master-0\" (UID: \"87a8662e-66f1-4aee-9344-564bb4ac4f9a\") " pod="openshift-etcd/installer-2-master-0" Mar 18 10:04:16.516781 master-0 kubenswrapper[8244]: I0318 10:04:16.516658 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 18 10:04:16.930840 master-0 kubenswrapper[8244]: W0318 10:04:16.930384 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod87a8662e_66f1_4aee_9344_564bb4ac4f9a.slice/crio-248be0eef87c6987bd3e5849d27bf7120297d80837bfe7be2b2148ea06921d34 WatchSource:0}: Error finding container 248be0eef87c6987bd3e5849d27bf7120297d80837bfe7be2b2148ea06921d34: Status 404 returned error can't find the container with id 248be0eef87c6987bd3e5849d27bf7120297d80837bfe7be2b2148ea06921d34 Mar 18 10:04:16.932864 master-0 kubenswrapper[8244]: I0318 10:04:16.932433 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 18 10:04:16.938433 master-0 kubenswrapper[8244]: I0318 10:04:16.937107 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"87a8662e-66f1-4aee-9344-564bb4ac4f9a","Type":"ContainerStarted","Data":"248be0eef87c6987bd3e5849d27bf7120297d80837bfe7be2b2148ea06921d34"} Mar 18 10:04:17.401775 master-0 kubenswrapper[8244]: I0318 10:04:17.401688 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:17.401775 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:17.401775 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:17.401775 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:17.402500 master-0 kubenswrapper[8244]: I0318 10:04:17.401778 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:17.942872 
master-0 kubenswrapper[8244]: I0318 10:04:17.942803 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"87a8662e-66f1-4aee-9344-564bb4ac4f9a","Type":"ContainerStarted","Data":"9741863ef9844fe110fec368fe8e35a337bceb7feefcd7589421d83a4b33ff81"} Mar 18 10:04:17.986625 master-0 kubenswrapper[8244]: I0318 10:04:17.986524 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-2-master-0" podStartSLOduration=1.986507378 podStartE2EDuration="1.986507378s" podCreationTimestamp="2026-03-18 10:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:04:17.984155257 +0000 UTC m=+574.463891385" watchObservedRunningTime="2026-03-18 10:04:17.986507378 +0000 UTC m=+574.466243506" Mar 18 10:04:18.400902 master-0 kubenswrapper[8244]: I0318 10:04:18.400843 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:18.400902 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:18.400902 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:18.400902 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:18.400902 master-0 kubenswrapper[8244]: I0318 10:04:18.400902 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:19.400936 master-0 kubenswrapper[8244]: I0318 10:04:19.400867 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:19.400936 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:19.400936 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:19.400936 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:19.401509 master-0 kubenswrapper[8244]: I0318 10:04:19.400977 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:20.400720 master-0 kubenswrapper[8244]: I0318 10:04:20.400644 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:20.400720 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:20.400720 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:20.400720 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:20.401304 master-0 kubenswrapper[8244]: I0318 10:04:20.400736 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:21.401697 master-0 kubenswrapper[8244]: I0318 10:04:21.401613 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:21.401697 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:21.401697 
master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:21.401697 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:21.402422 master-0 kubenswrapper[8244]: I0318 10:04:21.401715 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:22.064848 master-0 kubenswrapper[8244]: I0318 10:04:22.064004 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 18 10:04:22.064848 master-0 kubenswrapper[8244]: I0318 10:04:22.064397 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-2-master-0" podUID="f79f2e47-0828-4a77-b0e8-0e142aa55563" containerName="installer" containerID="cri-o://b217d141befd6f7b3d6db2018cfa659be0781d7e67bfded5194aa13f197e35f6" gracePeriod=30 Mar 18 10:04:22.402847 master-0 kubenswrapper[8244]: I0318 10:04:22.402673 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:22.402847 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:22.402847 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:22.402847 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:22.402847 master-0 kubenswrapper[8244]: I0318 10:04:22.402764 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:22.515956 master-0 kubenswrapper[8244]: I0318 10:04:22.515911 8244 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_f79f2e47-0828-4a77-b0e8-0e142aa55563/installer/0.log" Mar 18 10:04:22.516115 master-0 kubenswrapper[8244]: I0318 10:04:22.515977 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 10:04:22.711361 master-0 kubenswrapper[8244]: I0318 10:04:22.711190 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f79f2e47-0828-4a77-b0e8-0e142aa55563-var-lock\") pod \"f79f2e47-0828-4a77-b0e8-0e142aa55563\" (UID: \"f79f2e47-0828-4a77-b0e8-0e142aa55563\") " Mar 18 10:04:22.711361 master-0 kubenswrapper[8244]: I0318 10:04:22.711315 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f79f2e47-0828-4a77-b0e8-0e142aa55563-var-lock" (OuterVolumeSpecName: "var-lock") pod "f79f2e47-0828-4a77-b0e8-0e142aa55563" (UID: "f79f2e47-0828-4a77-b0e8-0e142aa55563"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:04:22.711669 master-0 kubenswrapper[8244]: I0318 10:04:22.711419 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f79f2e47-0828-4a77-b0e8-0e142aa55563-kubelet-dir\") pod \"f79f2e47-0828-4a77-b0e8-0e142aa55563\" (UID: \"f79f2e47-0828-4a77-b0e8-0e142aa55563\") " Mar 18 10:04:22.711669 master-0 kubenswrapper[8244]: I0318 10:04:22.711473 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f79f2e47-0828-4a77-b0e8-0e142aa55563-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f79f2e47-0828-4a77-b0e8-0e142aa55563" (UID: "f79f2e47-0828-4a77-b0e8-0e142aa55563"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 10:04:22.711669 master-0 kubenswrapper[8244]: I0318 10:04:22.711513 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f79f2e47-0828-4a77-b0e8-0e142aa55563-kube-api-access\") pod \"f79f2e47-0828-4a77-b0e8-0e142aa55563\" (UID: \"f79f2e47-0828-4a77-b0e8-0e142aa55563\") "
Mar 18 10:04:22.712176 master-0 kubenswrapper[8244]: I0318 10:04:22.712115 8244 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f79f2e47-0828-4a77-b0e8-0e142aa55563-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 10:04:22.712176 master-0 kubenswrapper[8244]: I0318 10:04:22.712168 8244 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f79f2e47-0828-4a77-b0e8-0e142aa55563-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 10:04:22.716006 master-0 kubenswrapper[8244]: I0318 10:04:22.715940 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f79f2e47-0828-4a77-b0e8-0e142aa55563-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f79f2e47-0828-4a77-b0e8-0e142aa55563" (UID: "f79f2e47-0828-4a77-b0e8-0e142aa55563"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 10:04:22.813597 master-0 kubenswrapper[8244]: I0318 10:04:22.813541 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f79f2e47-0828-4a77-b0e8-0e142aa55563-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 10:04:22.986938 master-0 kubenswrapper[8244]: I0318 10:04:22.986770 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_f79f2e47-0828-4a77-b0e8-0e142aa55563/installer/0.log"
Mar 18 10:04:22.986938 master-0 kubenswrapper[8244]: I0318 10:04:22.986853 8244 generic.go:334] "Generic (PLEG): container finished" podID="f79f2e47-0828-4a77-b0e8-0e142aa55563" containerID="b217d141befd6f7b3d6db2018cfa659be0781d7e67bfded5194aa13f197e35f6" exitCode=1
Mar 18 10:04:22.986938 master-0 kubenswrapper[8244]: I0318 10:04:22.986885 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"f79f2e47-0828-4a77-b0e8-0e142aa55563","Type":"ContainerDied","Data":"b217d141befd6f7b3d6db2018cfa659be0781d7e67bfded5194aa13f197e35f6"}
Mar 18 10:04:22.986938 master-0 kubenswrapper[8244]: I0318 10:04:22.986916 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"f79f2e47-0828-4a77-b0e8-0e142aa55563","Type":"ContainerDied","Data":"9cc91b7c35b06302ffa66829b51e40fc464b7f08e391da5d0d245e6ef074aa97"}
Mar 18 10:04:22.986938 master-0 kubenswrapper[8244]: I0318 10:04:22.986925 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 10:04:22.987525 master-0 kubenswrapper[8244]: I0318 10:04:22.986978 8244 scope.go:117] "RemoveContainer" containerID="b217d141befd6f7b3d6db2018cfa659be0781d7e67bfded5194aa13f197e35f6"
Mar 18 10:04:23.009976 master-0 kubenswrapper[8244]: I0318 10:04:23.009923 8244 scope.go:117] "RemoveContainer" containerID="b217d141befd6f7b3d6db2018cfa659be0781d7e67bfded5194aa13f197e35f6"
Mar 18 10:04:23.010536 master-0 kubenswrapper[8244]: E0318 10:04:23.010471 8244 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b217d141befd6f7b3d6db2018cfa659be0781d7e67bfded5194aa13f197e35f6\": container with ID starting with b217d141befd6f7b3d6db2018cfa659be0781d7e67bfded5194aa13f197e35f6 not found: ID does not exist" containerID="b217d141befd6f7b3d6db2018cfa659be0781d7e67bfded5194aa13f197e35f6"
Mar 18 10:04:23.010630 master-0 kubenswrapper[8244]: I0318 10:04:23.010536 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b217d141befd6f7b3d6db2018cfa659be0781d7e67bfded5194aa13f197e35f6"} err="failed to get container status \"b217d141befd6f7b3d6db2018cfa659be0781d7e67bfded5194aa13f197e35f6\": rpc error: code = NotFound desc = could not find container \"b217d141befd6f7b3d6db2018cfa659be0781d7e67bfded5194aa13f197e35f6\": container with ID starting with b217d141befd6f7b3d6db2018cfa659be0781d7e67bfded5194aa13f197e35f6 not found: ID does not exist"
Mar 18 10:04:23.132120 master-0 kubenswrapper[8244]: I0318 10:04:23.132028 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Mar 18 10:04:23.142428 master-0 kubenswrapper[8244]: I0318 10:04:23.142328 8244 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Mar 18 10:04:23.401295 master-0 kubenswrapper[8244]: I0318 10:04:23.401217 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:04:23.401295 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:04:23.401295 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:04:23.401295 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:04:23.401711 master-0 kubenswrapper[8244]: I0318 10:04:23.401305 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:04:23.696923 master-0 kubenswrapper[8244]: I0318 10:04:23.696743 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-6-master-0"]
Mar 18 10:04:23.697702 master-0 kubenswrapper[8244]: E0318 10:04:23.697457 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f79f2e47-0828-4a77-b0e8-0e142aa55563" containerName="installer"
Mar 18 10:04:23.697702 master-0 kubenswrapper[8244]: I0318 10:04:23.697490 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="f79f2e47-0828-4a77-b0e8-0e142aa55563" containerName="installer"
Mar 18 10:04:23.697875 master-0 kubenswrapper[8244]: I0318 10:04:23.697792 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="f79f2e47-0828-4a77-b0e8-0e142aa55563" containerName="installer"
Mar 18 10:04:23.698560 master-0 kubenswrapper[8244]: I0318 10:04:23.698523 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0"
Mar 18 10:04:23.701088 master-0 kubenswrapper[8244]: I0318 10:04:23.701038 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-zr9bx"
Mar 18 10:04:23.702047 master-0 kubenswrapper[8244]: I0318 10:04:23.702009 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Mar 18 10:04:23.711321 master-0 kubenswrapper[8244]: I0318 10:04:23.711231 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-6-master-0"]
Mar 18 10:04:23.751552 master-0 kubenswrapper[8244]: I0318 10:04:23.751472 8244 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f79f2e47-0828-4a77-b0e8-0e142aa55563" path="/var/lib/kubelet/pods/f79f2e47-0828-4a77-b0e8-0e142aa55563/volumes"
Mar 18 10:04:23.829986 master-0 kubenswrapper[8244]: I0318 10:04:23.829869 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ea5939e-5f4d-4028-9384-2ec5710ecdc8-kube-api-access\") pod \"installer-6-master-0\" (UID: \"4ea5939e-5f4d-4028-9384-2ec5710ecdc8\") " pod="openshift-kube-scheduler/installer-6-master-0"
Mar 18 10:04:23.830454 master-0 kubenswrapper[8244]: I0318 10:04:23.830381 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4ea5939e-5f4d-4028-9384-2ec5710ecdc8-var-lock\") pod \"installer-6-master-0\" (UID: \"4ea5939e-5f4d-4028-9384-2ec5710ecdc8\") " pod="openshift-kube-scheduler/installer-6-master-0"
Mar 18 10:04:23.830590 master-0 kubenswrapper[8244]: I0318 10:04:23.830474 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ea5939e-5f4d-4028-9384-2ec5710ecdc8-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"4ea5939e-5f4d-4028-9384-2ec5710ecdc8\") " pod="openshift-kube-scheduler/installer-6-master-0"
Mar 18 10:04:23.931475 master-0 kubenswrapper[8244]: I0318 10:04:23.931398 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4ea5939e-5f4d-4028-9384-2ec5710ecdc8-var-lock\") pod \"installer-6-master-0\" (UID: \"4ea5939e-5f4d-4028-9384-2ec5710ecdc8\") " pod="openshift-kube-scheduler/installer-6-master-0"
Mar 18 10:04:23.931475 master-0 kubenswrapper[8244]: I0318 10:04:23.931468 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ea5939e-5f4d-4028-9384-2ec5710ecdc8-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"4ea5939e-5f4d-4028-9384-2ec5710ecdc8\") " pod="openshift-kube-scheduler/installer-6-master-0"
Mar 18 10:04:23.931794 master-0 kubenswrapper[8244]: I0318 10:04:23.931517 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ea5939e-5f4d-4028-9384-2ec5710ecdc8-kube-api-access\") pod \"installer-6-master-0\" (UID: \"4ea5939e-5f4d-4028-9384-2ec5710ecdc8\") " pod="openshift-kube-scheduler/installer-6-master-0"
Mar 18 10:04:23.932621 master-0 kubenswrapper[8244]: I0318 10:04:23.932585 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4ea5939e-5f4d-4028-9384-2ec5710ecdc8-var-lock\") pod \"installer-6-master-0\" (UID: \"4ea5939e-5f4d-4028-9384-2ec5710ecdc8\") " pod="openshift-kube-scheduler/installer-6-master-0"
Mar 18 10:04:23.932744 master-0 kubenswrapper[8244]: I0318 10:04:23.932647 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ea5939e-5f4d-4028-9384-2ec5710ecdc8-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"4ea5939e-5f4d-4028-9384-2ec5710ecdc8\") " pod="openshift-kube-scheduler/installer-6-master-0"
Mar 18 10:04:23.962455 master-0 kubenswrapper[8244]: I0318 10:04:23.962250 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ea5939e-5f4d-4028-9384-2ec5710ecdc8-kube-api-access\") pod \"installer-6-master-0\" (UID: \"4ea5939e-5f4d-4028-9384-2ec5710ecdc8\") " pod="openshift-kube-scheduler/installer-6-master-0"
Mar 18 10:04:24.035133 master-0 kubenswrapper[8244]: I0318 10:04:24.035054 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0"
Mar 18 10:04:24.401424 master-0 kubenswrapper[8244]: I0318 10:04:24.401345 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:04:24.401424 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:04:24.401424 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:04:24.401424 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:04:24.401863 master-0 kubenswrapper[8244]: I0318 10:04:24.401444 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:04:24.457750 master-0 kubenswrapper[8244]: I0318 10:04:24.457671 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-6-master-0"]
Mar 18 10:04:24.461897 master-0 kubenswrapper[8244]: W0318 10:04:24.461838 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4ea5939e_5f4d_4028_9384_2ec5710ecdc8.slice/crio-823fdbbda6c3f662c8a7386983ae9bef843b30223cfc80549bf1fe24201c6148 WatchSource:0}: Error finding container 823fdbbda6c3f662c8a7386983ae9bef843b30223cfc80549bf1fe24201c6148: Status 404 returned error can't find the container with id 823fdbbda6c3f662c8a7386983ae9bef843b30223cfc80549bf1fe24201c6148
Mar 18 10:04:25.007162 master-0 kubenswrapper[8244]: I0318 10:04:25.006980 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" event={"ID":"4ea5939e-5f4d-4028-9384-2ec5710ecdc8","Type":"ContainerStarted","Data":"ee0f38924448efddd8bd62aa03fafbac2abe2ddc36be4b5eb348dac27bee7be4"}
Mar 18 10:04:25.007162 master-0 kubenswrapper[8244]: I0318 10:04:25.007055 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" event={"ID":"4ea5939e-5f4d-4028-9384-2ec5710ecdc8","Type":"ContainerStarted","Data":"823fdbbda6c3f662c8a7386983ae9bef843b30223cfc80549bf1fe24201c6148"}
Mar 18 10:04:25.033343 master-0 kubenswrapper[8244]: I0318 10:04:25.033212 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-6-master-0" podStartSLOduration=2.033183129 podStartE2EDuration="2.033183129s" podCreationTimestamp="2026-03-18 10:04:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:04:25.031195137 +0000 UTC m=+581.510931285" watchObservedRunningTime="2026-03-18 10:04:25.033183129 +0000 UTC m=+581.512919327"
Mar 18 10:04:25.401692 master-0 kubenswrapper[8244]: I0318 10:04:25.401580 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:04:25.401692 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:04:25.401692 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:04:25.401692 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:04:25.402348 master-0 kubenswrapper[8244]: I0318 10:04:25.401710 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:04:26.401707 master-0 kubenswrapper[8244]: I0318 10:04:26.401549 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:04:26.401707 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:04:26.401707 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:04:26.401707 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:04:26.401707 master-0 kubenswrapper[8244]: I0318 10:04:26.401621 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:04:26.451523 master-0 kubenswrapper[8244]: I0318 10:04:26.451476 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_a13b76d1-aad1-4ca8-8991-2041a4b10c15/installer/0.log"
Mar 18 10:04:26.451660 master-0 kubenswrapper[8244]: I0318 10:04:26.451581 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 18 10:04:26.586150 master-0 kubenswrapper[8244]: I0318 10:04:26.586071 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a13b76d1-aad1-4ca8-8991-2041a4b10c15-var-lock\") pod \"a13b76d1-aad1-4ca8-8991-2041a4b10c15\" (UID: \"a13b76d1-aad1-4ca8-8991-2041a4b10c15\") "
Mar 18 10:04:26.586487 master-0 kubenswrapper[8244]: I0318 10:04:26.586197 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a13b76d1-aad1-4ca8-8991-2041a4b10c15-var-lock" (OuterVolumeSpecName: "var-lock") pod "a13b76d1-aad1-4ca8-8991-2041a4b10c15" (UID: "a13b76d1-aad1-4ca8-8991-2041a4b10c15"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 10:04:26.586487 master-0 kubenswrapper[8244]: I0318 10:04:26.586417 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a13b76d1-aad1-4ca8-8991-2041a4b10c15-kubelet-dir\") pod \"a13b76d1-aad1-4ca8-8991-2041a4b10c15\" (UID: \"a13b76d1-aad1-4ca8-8991-2041a4b10c15\") "
Mar 18 10:04:26.586487 master-0 kubenswrapper[8244]: I0318 10:04:26.586440 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a13b76d1-aad1-4ca8-8991-2041a4b10c15-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a13b76d1-aad1-4ca8-8991-2041a4b10c15" (UID: "a13b76d1-aad1-4ca8-8991-2041a4b10c15"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 10:04:26.586487 master-0 kubenswrapper[8244]: I0318 10:04:26.586462 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a13b76d1-aad1-4ca8-8991-2041a4b10c15-kube-api-access\") pod \"a13b76d1-aad1-4ca8-8991-2041a4b10c15\" (UID: \"a13b76d1-aad1-4ca8-8991-2041a4b10c15\") "
Mar 18 10:04:26.586948 master-0 kubenswrapper[8244]: I0318 10:04:26.586910 8244 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a13b76d1-aad1-4ca8-8991-2041a4b10c15-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 10:04:26.586948 master-0 kubenswrapper[8244]: I0318 10:04:26.586931 8244 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a13b76d1-aad1-4ca8-8991-2041a4b10c15-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 10:04:26.591507 master-0 kubenswrapper[8244]: I0318 10:04:26.591466 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a13b76d1-aad1-4ca8-8991-2041a4b10c15-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a13b76d1-aad1-4ca8-8991-2041a4b10c15" (UID: "a13b76d1-aad1-4ca8-8991-2041a4b10c15"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 10:04:26.688493 master-0 kubenswrapper[8244]: I0318 10:04:26.688316 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a13b76d1-aad1-4ca8-8991-2041a4b10c15-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 10:04:27.029374 master-0 kubenswrapper[8244]: I0318 10:04:27.029133 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_a13b76d1-aad1-4ca8-8991-2041a4b10c15/installer/0.log"
Mar 18 10:04:27.029642 master-0 kubenswrapper[8244]: I0318 10:04:27.029580 8244 generic.go:334] "Generic (PLEG): container finished" podID="a13b76d1-aad1-4ca8-8991-2041a4b10c15" containerID="5207aff9b2a59447b37e83be01595dae8c33a9575960fb1ab22f24275dd785cb" exitCode=1
Mar 18 10:04:27.029750 master-0 kubenswrapper[8244]: I0318 10:04:27.029638 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"a13b76d1-aad1-4ca8-8991-2041a4b10c15","Type":"ContainerDied","Data":"5207aff9b2a59447b37e83be01595dae8c33a9575960fb1ab22f24275dd785cb"}
Mar 18 10:04:27.029750 master-0 kubenswrapper[8244]: I0318 10:04:27.029693 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"a13b76d1-aad1-4ca8-8991-2041a4b10c15","Type":"ContainerDied","Data":"c1634d2a2ea9df3e563d807666a5bf99578cccd363e274a20292f790cffc4c74"}
Mar 18 10:04:27.029750 master-0 kubenswrapper[8244]: I0318 10:04:27.029726 8244 scope.go:117] "RemoveContainer" containerID="5207aff9b2a59447b37e83be01595dae8c33a9575960fb1ab22f24275dd785cb"
Mar 18 10:04:27.030021 master-0 kubenswrapper[8244]: I0318 10:04:27.029775 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 18 10:04:27.059970 master-0 kubenswrapper[8244]: I0318 10:04:27.059808 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Mar 18 10:04:27.060399 master-0 kubenswrapper[8244]: E0318 10:04:27.060321 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a13b76d1-aad1-4ca8-8991-2041a4b10c15" containerName="installer"
Mar 18 10:04:27.060399 master-0 kubenswrapper[8244]: I0318 10:04:27.060373 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="a13b76d1-aad1-4ca8-8991-2041a4b10c15" containerName="installer"
Mar 18 10:04:27.060694 master-0 kubenswrapper[8244]: I0318 10:04:27.060633 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="a13b76d1-aad1-4ca8-8991-2041a4b10c15" containerName="installer"
Mar 18 10:04:27.075024 master-0 kubenswrapper[8244]: I0318 10:04:27.072743 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 10:04:27.076693 master-0 kubenswrapper[8244]: I0318 10:04:27.075885 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-244m4"
Mar 18 10:04:27.076978 master-0 kubenswrapper[8244]: I0318 10:04:27.076885 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 18 10:04:27.090927 master-0 kubenswrapper[8244]: I0318 10:04:27.085399 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Mar 18 10:04:27.116277 master-0 kubenswrapper[8244]: I0318 10:04:27.095771 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2610d88e-f450-455a-9db5-dc59c1d97bf4-var-lock\") pod \"installer-3-master-0\" (UID: \"2610d88e-f450-455a-9db5-dc59c1d97bf4\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 10:04:27.116277 master-0 kubenswrapper[8244]: I0318 10:04:27.096057 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2610d88e-f450-455a-9db5-dc59c1d97bf4-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"2610d88e-f450-455a-9db5-dc59c1d97bf4\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 10:04:27.116277 master-0 kubenswrapper[8244]: I0318 10:04:27.096268 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2610d88e-f450-455a-9db5-dc59c1d97bf4-kube-api-access\") pod \"installer-3-master-0\" (UID: \"2610d88e-f450-455a-9db5-dc59c1d97bf4\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 10:04:27.116277 master-0 kubenswrapper[8244]: I0318 10:04:27.099490 8244 scope.go:117] "RemoveContainer" containerID="5207aff9b2a59447b37e83be01595dae8c33a9575960fb1ab22f24275dd785cb"
Mar 18 10:04:27.116277 master-0 kubenswrapper[8244]: E0318 10:04:27.103272 8244 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5207aff9b2a59447b37e83be01595dae8c33a9575960fb1ab22f24275dd785cb\": container with ID starting with 5207aff9b2a59447b37e83be01595dae8c33a9575960fb1ab22f24275dd785cb not found: ID does not exist" containerID="5207aff9b2a59447b37e83be01595dae8c33a9575960fb1ab22f24275dd785cb"
Mar 18 10:04:27.116277 master-0 kubenswrapper[8244]: I0318 10:04:27.103374 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5207aff9b2a59447b37e83be01595dae8c33a9575960fb1ab22f24275dd785cb"} err="failed to get container status \"5207aff9b2a59447b37e83be01595dae8c33a9575960fb1ab22f24275dd785cb\": rpc error: code = NotFound desc = could not find container \"5207aff9b2a59447b37e83be01595dae8c33a9575960fb1ab22f24275dd785cb\": container with ID starting with 5207aff9b2a59447b37e83be01595dae8c33a9575960fb1ab22f24275dd785cb not found: ID does not exist"
Mar 18 10:04:27.116277 master-0 kubenswrapper[8244]: I0318 10:04:27.115676 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Mar 18 10:04:27.124366 master-0 kubenswrapper[8244]: I0318 10:04:27.123134 8244 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Mar 18 10:04:27.197286 master-0 kubenswrapper[8244]: I0318 10:04:27.197219 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2610d88e-f450-455a-9db5-dc59c1d97bf4-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"2610d88e-f450-455a-9db5-dc59c1d97bf4\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 10:04:27.197286 master-0 kubenswrapper[8244]: I0318 10:04:27.197276 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2610d88e-f450-455a-9db5-dc59c1d97bf4-kube-api-access\") pod \"installer-3-master-0\" (UID: \"2610d88e-f450-455a-9db5-dc59c1d97bf4\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 10:04:27.197516 master-0 kubenswrapper[8244]: I0318 10:04:27.197303 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2610d88e-f450-455a-9db5-dc59c1d97bf4-var-lock\") pod \"installer-3-master-0\" (UID: \"2610d88e-f450-455a-9db5-dc59c1d97bf4\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 10:04:27.197516 master-0 kubenswrapper[8244]: I0318 10:04:27.197379 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2610d88e-f450-455a-9db5-dc59c1d97bf4-var-lock\") pod \"installer-3-master-0\" (UID: \"2610d88e-f450-455a-9db5-dc59c1d97bf4\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 10:04:27.197516 master-0 kubenswrapper[8244]: I0318 10:04:27.197422 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2610d88e-f450-455a-9db5-dc59c1d97bf4-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"2610d88e-f450-455a-9db5-dc59c1d97bf4\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 10:04:27.216994 master-0 kubenswrapper[8244]: I0318 10:04:27.216946 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2610d88e-f450-455a-9db5-dc59c1d97bf4-kube-api-access\") pod \"installer-3-master-0\" (UID: \"2610d88e-f450-455a-9db5-dc59c1d97bf4\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 10:04:27.401323 master-0 kubenswrapper[8244]: I0318 10:04:27.401251 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:04:27.401323 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:04:27.401323 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:04:27.401323 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:04:27.401777 master-0 kubenswrapper[8244]: I0318 10:04:27.401345 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:04:27.424189 master-0 kubenswrapper[8244]: I0318 10:04:27.424098 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 10:04:27.744547 master-0 kubenswrapper[8244]: I0318 10:04:27.744384 8244 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a13b76d1-aad1-4ca8-8991-2041a4b10c15" path="/var/lib/kubelet/pods/a13b76d1-aad1-4ca8-8991-2041a4b10c15/volumes"
Mar 18 10:04:27.933064 master-0 kubenswrapper[8244]: I0318 10:04:27.933001 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Mar 18 10:04:27.942302 master-0 kubenswrapper[8244]: W0318 10:04:27.942000 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod2610d88e_f450_455a_9db5_dc59c1d97bf4.slice/crio-e3402d97d5cd0c562a44a7222ad82fa96c21a20426f7aec38a099e12bb0d5c81 WatchSource:0}: Error finding container e3402d97d5cd0c562a44a7222ad82fa96c21a20426f7aec38a099e12bb0d5c81: Status 404 returned error can't find the container with id e3402d97d5cd0c562a44a7222ad82fa96c21a20426f7aec38a099e12bb0d5c81
Mar 18 10:04:28.050624 master-0 kubenswrapper[8244]: I0318 10:04:28.050584 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"2610d88e-f450-455a-9db5-dc59c1d97bf4","Type":"ContainerStarted","Data":"e3402d97d5cd0c562a44a7222ad82fa96c21a20426f7aec38a099e12bb0d5c81"}
Mar 18 10:04:28.400038 master-0 kubenswrapper[8244]: I0318 10:04:28.399944 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:04:28.400038 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:04:28.400038 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:04:28.400038 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:04:28.400456 master-0 kubenswrapper[8244]: I0318 10:04:28.400392 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:04:29.068419 master-0 kubenswrapper[8244]: I0318 10:04:29.068342 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"2610d88e-f450-455a-9db5-dc59c1d97bf4","Type":"ContainerStarted","Data":"53abf8c46c4806e802330543e52578e77a18490ade1e7a702b54871790c5701b"}
Mar 18 10:04:29.136232 master-0 kubenswrapper[8244]: I0318 10:04:29.136111 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Mar 18 10:04:29.139878 master-0 kubenswrapper[8244]: I0318 10:04:29.139759 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-3-master-0" podStartSLOduration=2.139726327 podStartE2EDuration="2.139726327s" podCreationTimestamp="2026-03-18 10:04:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:04:29.134785309 +0000 UTC m=+585.614521447" watchObservedRunningTime="2026-03-18 10:04:29.139726327 +0000 UTC m=+585.619462485"
Mar 18 10:04:29.141932 master-0 kubenswrapper[8244]: I0318 10:04:29.141874 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 18 10:04:29.145229 master-0 kubenswrapper[8244]: I0318 10:04:29.145190 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Mar 18 10:04:29.145472 master-0 kubenswrapper[8244]: I0318 10:04:29.145227 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-76rsr"
Mar 18 10:04:29.178025 master-0 kubenswrapper[8244]: I0318 10:04:29.177967 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Mar 18 10:04:29.227998 master-0 kubenswrapper[8244]: I0318 10:04:29.227891 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/449dc8b3-72b7-4be5-b5ab-ed4d632f52b2-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"449dc8b3-72b7-4be5-b5ab-ed4d632f52b2\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 18 10:04:29.228320 master-0 kubenswrapper[8244]: I0318 10:04:29.228072 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/449dc8b3-72b7-4be5-b5ab-ed4d632f52b2-var-lock\") pod \"installer-4-master-0\" (UID: \"449dc8b3-72b7-4be5-b5ab-ed4d632f52b2\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 18 10:04:29.228320 master-0 kubenswrapper[8244]: I0318 10:04:29.228246 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/449dc8b3-72b7-4be5-b5ab-ed4d632f52b2-kube-api-access\") pod \"installer-4-master-0\" (UID: \"449dc8b3-72b7-4be5-b5ab-ed4d632f52b2\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 18 10:04:29.329738 master-0 kubenswrapper[8244]: I0318 10:04:29.329573 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/449dc8b3-72b7-4be5-b5ab-ed4d632f52b2-kube-api-access\") pod \"installer-4-master-0\" (UID: \"449dc8b3-72b7-4be5-b5ab-ed4d632f52b2\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 18 10:04:29.329998 master-0 kubenswrapper[8244]: I0318 10:04:29.329831 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/449dc8b3-72b7-4be5-b5ab-ed4d632f52b2-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"449dc8b3-72b7-4be5-b5ab-ed4d632f52b2\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 18 10:04:29.329998 master-0 kubenswrapper[8244]: I0318 10:04:29.329895 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/449dc8b3-72b7-4be5-b5ab-ed4d632f52b2-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"449dc8b3-72b7-4be5-b5ab-ed4d632f52b2\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 18 10:04:29.329998 master-0 kubenswrapper[8244]: I0318 10:04:29.329916 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/449dc8b3-72b7-4be5-b5ab-ed4d632f52b2-var-lock\") pod \"installer-4-master-0\" (UID: \"449dc8b3-72b7-4be5-b5ab-ed4d632f52b2\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 18 10:04:29.329998 master-0 kubenswrapper[8244]: I0318 10:04:29.329949 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/449dc8b3-72b7-4be5-b5ab-ed4d632f52b2-var-lock\") pod \"installer-4-master-0\" (UID: \"449dc8b3-72b7-4be5-b5ab-ed4d632f52b2\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 18 10:04:29.344955 master-0 kubenswrapper[8244]: I0318 10:04:29.344880 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/449dc8b3-72b7-4be5-b5ab-ed4d632f52b2-kube-api-access\") pod \"installer-4-master-0\" (UID: \"449dc8b3-72b7-4be5-b5ab-ed4d632f52b2\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 18 10:04:29.401270 master-0 kubenswrapper[8244]: I0318 10:04:29.401190 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:04:29.401270 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:04:29.401270 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:04:29.401270 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:04:29.401676 master-0 kubenswrapper[8244]: I0318 10:04:29.401283 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:04:29.480472 master-0 kubenswrapper[8244]: I0318 10:04:29.480388 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 18 10:04:29.745379 master-0 kubenswrapper[8244]: I0318 10:04:29.745277 8244 scope.go:117] "RemoveContainer" containerID="19028a9b74d8fde675db8214cb7dc59516cd57bb8937a1e369ea219dd5ad277c"
Mar 18 10:04:29.745918 master-0 kubenswrapper[8244]: E0318 10:04:29.745807 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-kr5kz_openshift-ingress-operator(accc57fb-75f5-4f89-9804-6ede7f77e27c)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" podUID="accc57fb-75f5-4f89-9804-6ede7f77e27c"
Mar 18 10:04:29.976795 master-0 kubenswrapper[8244]: I0318 10:04:29.976728 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Mar 18 10:04:29.980117 master-0 kubenswrapper[8244]: W0318 10:04:29.980058 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod449dc8b3_72b7_4be5_b5ab_ed4d632f52b2.slice/crio-ab6781799773a4bd269941acef201c1236103b10079655748dd8db69e5953242 WatchSource:0}: Error finding container ab6781799773a4bd269941acef201c1236103b10079655748dd8db69e5953242: Status 404 returned error can't find the container with id ab6781799773a4bd269941acef201c1236103b10079655748dd8db69e5953242
Mar 18 10:04:30.080494 master-0 kubenswrapper[8244]: I0318 10:04:30.080297 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"449dc8b3-72b7-4be5-b5ab-ed4d632f52b2","Type":"ContainerStarted","Data":"ab6781799773a4bd269941acef201c1236103b10079655748dd8db69e5953242"}
Mar 18 10:04:30.401168 master-0 kubenswrapper[8244]: I0318 10:04:30.401078 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:30.401168 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:30.401168 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:30.401168 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:30.401939 master-0 kubenswrapper[8244]: I0318 10:04:30.401223 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:31.089231 master-0 kubenswrapper[8244]: I0318 10:04:31.089169 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"449dc8b3-72b7-4be5-b5ab-ed4d632f52b2","Type":"ContainerStarted","Data":"01fbb9d5ae86373a51c41f3a5e60d86ed2cd0a315f2ae635082fa660578bf765"} Mar 18 10:04:31.114042 master-0 kubenswrapper[8244]: I0318 10:04:31.113959 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-master-0" podStartSLOduration=2.113941335 podStartE2EDuration="2.113941335s" podCreationTimestamp="2026-03-18 10:04:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:04:31.1133523 +0000 UTC m=+587.593088428" watchObservedRunningTime="2026-03-18 10:04:31.113941335 +0000 UTC m=+587.593677463" Mar 18 10:04:31.401618 master-0 kubenswrapper[8244]: I0318 10:04:31.401470 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 
10:04:31.401618 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:31.401618 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:31.401618 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:31.401618 master-0 kubenswrapper[8244]: I0318 10:04:31.401555 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:32.402610 master-0 kubenswrapper[8244]: I0318 10:04:32.402516 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:32.402610 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:32.402610 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:32.402610 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:32.403816 master-0 kubenswrapper[8244]: I0318 10:04:32.402613 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:33.401788 master-0 kubenswrapper[8244]: I0318 10:04:33.401722 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:33.401788 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:33.401788 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:33.401788 master-0 kubenswrapper[8244]: healthz 
check failed Mar 18 10:04:33.402532 master-0 kubenswrapper[8244]: I0318 10:04:33.402492 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:34.402386 master-0 kubenswrapper[8244]: I0318 10:04:34.402328 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:34.402386 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:34.402386 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:34.402386 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:34.403517 master-0 kubenswrapper[8244]: I0318 10:04:34.403473 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:34.793693 master-0 kubenswrapper[8244]: I0318 10:04:34.793626 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 10:04:35.402144 master-0 kubenswrapper[8244]: I0318 10:04:35.402090 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:35.402144 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:35.402144 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:35.402144 master-0 kubenswrapper[8244]: healthz check 
failed Mar 18 10:04:35.403289 master-0 kubenswrapper[8244]: I0318 10:04:35.403206 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:36.402433 master-0 kubenswrapper[8244]: I0318 10:04:36.402319 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:36.402433 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:36.402433 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:36.402433 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:36.402433 master-0 kubenswrapper[8244]: I0318 10:04:36.402408 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:37.400297 master-0 kubenswrapper[8244]: I0318 10:04:37.400234 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:37.400297 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:37.400297 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:37.400297 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:37.400549 master-0 kubenswrapper[8244]: I0318 10:04:37.400306 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" 
podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:38.401281 master-0 kubenswrapper[8244]: I0318 10:04:38.401222 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:38.401281 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:38.401281 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:38.401281 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:38.402516 master-0 kubenswrapper[8244]: I0318 10:04:38.401307 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:39.403267 master-0 kubenswrapper[8244]: I0318 10:04:39.403154 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:39.403267 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:39.403267 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:39.403267 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:39.404463 master-0 kubenswrapper[8244]: I0318 10:04:39.403274 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:40.401572 master-0 kubenswrapper[8244]: I0318 10:04:40.401419 8244 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:40.401572 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:40.401572 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:40.401572 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:40.401572 master-0 kubenswrapper[8244]: I0318 10:04:40.401562 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:41.402068 master-0 kubenswrapper[8244]: I0318 10:04:41.402027 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:41.402068 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:41.402068 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:41.402068 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:41.402766 master-0 kubenswrapper[8244]: I0318 10:04:41.402738 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:42.401317 master-0 kubenswrapper[8244]: I0318 10:04:42.401240 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:42.401317 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:42.401317 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:42.401317 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:42.401683 master-0 kubenswrapper[8244]: I0318 10:04:42.401344 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:42.436959 master-0 kubenswrapper[8244]: I0318 10:04:42.435775 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 18 10:04:42.436959 master-0 kubenswrapper[8244]: I0318 10:04:42.436485 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-3-master-0" podUID="2610d88e-f450-455a-9db5-dc59c1d97bf4" containerName="installer" containerID="cri-o://53abf8c46c4806e802330543e52578e77a18490ade1e7a702b54871790c5701b" gracePeriod=30 Mar 18 10:04:43.401938 master-0 kubenswrapper[8244]: I0318 10:04:43.401848 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:43.401938 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:43.401938 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:43.401938 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:43.402360 master-0 kubenswrapper[8244]: I0318 10:04:43.401968 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:44.402177 master-0 kubenswrapper[8244]: I0318 10:04:44.402054 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:44.402177 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:44.402177 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:44.402177 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:44.403993 master-0 kubenswrapper[8244]: I0318 10:04:44.403203 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:44.734168 master-0 kubenswrapper[8244]: I0318 10:04:44.734043 8244 scope.go:117] "RemoveContainer" containerID="19028a9b74d8fde675db8214cb7dc59516cd57bb8937a1e369ea219dd5ad277c" Mar 18 10:04:45.218898 master-0 kubenswrapper[8244]: I0318 10:04:45.217509 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-kr5kz_accc57fb-75f5-4f89-9804-6ede7f77e27c/ingress-operator/3.log" Mar 18 10:04:45.218898 master-0 kubenswrapper[8244]: I0318 10:04:45.217783 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" event={"ID":"accc57fb-75f5-4f89-9804-6ede7f77e27c","Type":"ContainerStarted","Data":"ced2dd809e0469dcbc3622bf167909cd0814985fc6dca12aa44de7553e61867c"} Mar 18 10:04:45.401933 master-0 kubenswrapper[8244]: I0318 10:04:45.401392 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP 
probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:45.401933 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:45.401933 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:45.401933 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:45.401933 master-0 kubenswrapper[8244]: I0318 10:04:45.401476 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:45.844609 master-0 kubenswrapper[8244]: I0318 10:04:45.844510 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 18 10:04:45.846290 master-0 kubenswrapper[8244]: I0318 10:04:45.846251 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 10:04:45.864381 master-0 kubenswrapper[8244]: I0318 10:04:45.864322 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 18 10:04:45.920295 master-0 kubenswrapper[8244]: I0318 10:04:45.920207 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90db95c5-2017-4b04-b11c-9844947c5be9-kube-api-access\") pod \"installer-4-master-0\" (UID: \"90db95c5-2017-4b04-b11c-9844947c5be9\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 10:04:45.920551 master-0 kubenswrapper[8244]: I0318 10:04:45.920365 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/90db95c5-2017-4b04-b11c-9844947c5be9-var-lock\") pod \"installer-4-master-0\" (UID: \"90db95c5-2017-4b04-b11c-9844947c5be9\") " 
pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 10:04:45.920551 master-0 kubenswrapper[8244]: I0318 10:04:45.920423 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/90db95c5-2017-4b04-b11c-9844947c5be9-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"90db95c5-2017-4b04-b11c-9844947c5be9\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 10:04:46.021427 master-0 kubenswrapper[8244]: I0318 10:04:46.021351 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90db95c5-2017-4b04-b11c-9844947c5be9-kube-api-access\") pod \"installer-4-master-0\" (UID: \"90db95c5-2017-4b04-b11c-9844947c5be9\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 10:04:46.021669 master-0 kubenswrapper[8244]: I0318 10:04:46.021508 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/90db95c5-2017-4b04-b11c-9844947c5be9-var-lock\") pod \"installer-4-master-0\" (UID: \"90db95c5-2017-4b04-b11c-9844947c5be9\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 10:04:46.021669 master-0 kubenswrapper[8244]: I0318 10:04:46.021573 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/90db95c5-2017-4b04-b11c-9844947c5be9-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"90db95c5-2017-4b04-b11c-9844947c5be9\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 10:04:46.021763 master-0 kubenswrapper[8244]: I0318 10:04:46.021691 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/90db95c5-2017-4b04-b11c-9844947c5be9-var-lock\") pod \"installer-4-master-0\" (UID: \"90db95c5-2017-4b04-b11c-9844947c5be9\") " 
pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 10:04:46.021811 master-0 kubenswrapper[8244]: I0318 10:04:46.021726 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/90db95c5-2017-4b04-b11c-9844947c5be9-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"90db95c5-2017-4b04-b11c-9844947c5be9\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 10:04:46.043052 master-0 kubenswrapper[8244]: I0318 10:04:46.042976 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90db95c5-2017-4b04-b11c-9844947c5be9-kube-api-access\") pod \"installer-4-master-0\" (UID: \"90db95c5-2017-4b04-b11c-9844947c5be9\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 10:04:46.186588 master-0 kubenswrapper[8244]: I0318 10:04:46.186456 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 10:04:46.400977 master-0 kubenswrapper[8244]: I0318 10:04:46.400915 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:46.400977 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:46.400977 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:46.400977 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:46.401240 master-0 kubenswrapper[8244]: I0318 10:04:46.400993 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:46.646567 master-0 kubenswrapper[8244]: I0318 10:04:46.646389 
8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 18 10:04:47.244115 master-0 kubenswrapper[8244]: I0318 10:04:47.243949 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"90db95c5-2017-4b04-b11c-9844947c5be9","Type":"ContainerStarted","Data":"84fe69ce9654e0f778c53fad94cc55da3a405c4d3f78319e40a6e7f4b1d02966"} Mar 18 10:04:47.244115 master-0 kubenswrapper[8244]: I0318 10:04:47.244011 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"90db95c5-2017-4b04-b11c-9844947c5be9","Type":"ContainerStarted","Data":"2b33ec4b21a843e83059f3a27a8bc8244c587a53368b1233d2c8ea0115ce547d"} Mar 18 10:04:47.270271 master-0 kubenswrapper[8244]: I0318 10:04:47.270192 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-4-master-0" podStartSLOduration=2.270167873 podStartE2EDuration="2.270167873s" podCreationTimestamp="2026-03-18 10:04:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:04:47.266167469 +0000 UTC m=+603.745903637" watchObservedRunningTime="2026-03-18 10:04:47.270167873 +0000 UTC m=+603.749904021" Mar 18 10:04:47.401711 master-0 kubenswrapper[8244]: I0318 10:04:47.401659 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:47.401711 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:47.401711 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:47.401711 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:47.401987 master-0 kubenswrapper[8244]: I0318 
10:04:47.401737 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:48.300486 master-0 kubenswrapper[8244]: I0318 10:04:48.300422 8244 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 18 10:04:48.301871 master-0 kubenswrapper[8244]: I0318 10:04:48.301722 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" containerID="cri-o://86bd68e4c693d142d1db15cea87ea6f7837f758701127f7c99332e838f73bebb" gracePeriod=30 Mar 18 10:04:48.301871 master-0 kubenswrapper[8244]: I0318 10:04:48.301814 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" containerID="cri-o://42a149c50fa78a2d332b3b42781dca70edc3a9ddbb27284d7e5aad00fc9537cf" gracePeriod=30 Mar 18 10:04:48.302124 master-0 kubenswrapper[8244]: I0318 10:04:48.301899 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-metrics" containerID="cri-o://b1623083f7035de6d7b40e1f05f6d5b195a69fc0fe34d1a182012e756cac80ac" gracePeriod=30 Mar 18 10:04:48.302124 master-0 kubenswrapper[8244]: I0318 10:04:48.301927 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-readyz" containerID="cri-o://a1cc9c7d23364160318dce9d2e1458d63a625f367c87f4f12ac9c93662e4dc3c" gracePeriod=30 Mar 18 10:04:48.302124 master-0 kubenswrapper[8244]: I0318 10:04:48.301899 8244 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" containerID="cri-o://8499a2d7c7a3d0807ff2dbfdf75fb879a647f69e89c6a7abf9768989efb505e2" gracePeriod=30 Mar 18 10:04:48.307729 master-0 kubenswrapper[8244]: I0318 10:04:48.307677 8244 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 18 10:04:48.308151 master-0 kubenswrapper[8244]: E0318 10:04:48.308110 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" Mar 18 10:04:48.308151 master-0 kubenswrapper[8244]: I0318 10:04:48.308138 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" Mar 18 10:04:48.308476 master-0 kubenswrapper[8244]: E0318 10:04:48.308164 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-metrics" Mar 18 10:04:48.308476 master-0 kubenswrapper[8244]: I0318 10:04:48.308178 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-metrics" Mar 18 10:04:48.308476 master-0 kubenswrapper[8244]: E0318 10:04:48.308192 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" Mar 18 10:04:48.308476 master-0 kubenswrapper[8244]: I0318 10:04:48.308207 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" Mar 18 10:04:48.308476 master-0 kubenswrapper[8244]: E0318 10:04:48.308222 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-readyz" Mar 18 10:04:48.308476 master-0 kubenswrapper[8244]: I0318 10:04:48.308235 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-readyz" Mar 18 10:04:48.308476 
master-0 kubenswrapper[8244]: E0318 10:04:48.308260 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" Mar 18 10:04:48.308476 master-0 kubenswrapper[8244]: I0318 10:04:48.308272 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" Mar 18 10:04:48.308476 master-0 kubenswrapper[8244]: E0318 10:04:48.308293 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-ensure-env-vars" Mar 18 10:04:48.308476 master-0 kubenswrapper[8244]: I0318 10:04:48.308306 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-ensure-env-vars" Mar 18 10:04:48.308476 master-0 kubenswrapper[8244]: E0318 10:04:48.308329 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-resources-copy" Mar 18 10:04:48.308476 master-0 kubenswrapper[8244]: I0318 10:04:48.308341 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-resources-copy" Mar 18 10:04:48.308476 master-0 kubenswrapper[8244]: E0318 10:04:48.308355 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="setup" Mar 18 10:04:48.308476 master-0 kubenswrapper[8244]: I0318 10:04:48.308367 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="setup" Mar 18 10:04:48.309379 master-0 kubenswrapper[8244]: I0318 10:04:48.308572 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" Mar 18 10:04:48.309379 master-0 kubenswrapper[8244]: I0318 10:04:48.308593 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" 
containerName="etcd-metrics" Mar 18 10:04:48.309379 master-0 kubenswrapper[8244]: I0318 10:04:48.308611 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-readyz" Mar 18 10:04:48.309379 master-0 kubenswrapper[8244]: I0318 10:04:48.308634 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" Mar 18 10:04:48.309379 master-0 kubenswrapper[8244]: I0318 10:04:48.308654 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" Mar 18 10:04:48.369215 master-0 kubenswrapper[8244]: I0318 10:04:48.369128 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 10:04:48.369417 master-0 kubenswrapper[8244]: I0318 10:04:48.369329 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 10:04:48.369541 master-0 kubenswrapper[8244]: I0318 10:04:48.369423 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 10:04:48.369646 master-0 kubenswrapper[8244]: I0318 10:04:48.369569 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 10:04:48.369646 master-0 kubenswrapper[8244]: I0318 10:04:48.369606 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 10:04:48.369892 master-0 kubenswrapper[8244]: I0318 10:04:48.369652 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 10:04:48.401946 master-0 kubenswrapper[8244]: I0318 10:04:48.401881 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:48.401946 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:48.401946 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:48.401946 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:48.402277 master-0 kubenswrapper[8244]: I0318 10:04:48.401979 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:48.471153 master-0 kubenswrapper[8244]: I0318 10:04:48.471079 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" 
(UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 10:04:48.471324 master-0 kubenswrapper[8244]: I0318 10:04:48.471154 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 10:04:48.471324 master-0 kubenswrapper[8244]: I0318 10:04:48.471229 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 10:04:48.471324 master-0 kubenswrapper[8244]: I0318 10:04:48.471245 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 10:04:48.471461 master-0 kubenswrapper[8244]: I0318 10:04:48.471331 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 10:04:48.471461 master-0 kubenswrapper[8244]: I0318 10:04:48.471332 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 
10:04:48.471461 master-0 kubenswrapper[8244]: I0318 10:04:48.471373 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 10:04:48.471461 master-0 kubenswrapper[8244]: I0318 10:04:48.471429 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 10:04:48.471461 master-0 kubenswrapper[8244]: I0318 10:04:48.471439 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 10:04:48.471645 master-0 kubenswrapper[8244]: I0318 10:04:48.471474 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 10:04:48.471645 master-0 kubenswrapper[8244]: I0318 10:04:48.471556 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 10:04:48.471645 master-0 kubenswrapper[8244]: I0318 10:04:48.471593 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 10:04:48.922751 master-0 kubenswrapper[8244]: I0318 10:04:48.922691 8244 patch_prober.go:28] interesting pod/etcd-master-0 container/etcd namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.32.10:9980/readyz\": dial tcp 192.168.32.10:9980: connect: connection refused" start-of-body= Mar 18 10:04:48.923210 master-0 kubenswrapper[8244]: I0318 10:04:48.923161 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" probeResult="failure" output="Get \"https://192.168.32.10:9980/readyz\": dial tcp 192.168.32.10:9980: connect: connection refused" Mar 18 10:04:49.260044 master-0 kubenswrapper[8244]: I0318 10:04:49.259750 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log" Mar 18 10:04:49.262299 master-0 kubenswrapper[8244]: I0318 10:04:49.262237 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log" Mar 18 10:04:49.265434 master-0 kubenswrapper[8244]: I0318 10:04:49.265358 8244 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="42a149c50fa78a2d332b3b42781dca70edc3a9ddbb27284d7e5aad00fc9537cf" exitCode=2 Mar 18 10:04:49.265434 master-0 kubenswrapper[8244]: I0318 10:04:49.265400 8244 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="a1cc9c7d23364160318dce9d2e1458d63a625f367c87f4f12ac9c93662e4dc3c" exitCode=0 Mar 18 10:04:49.265649 master-0 kubenswrapper[8244]: I0318 10:04:49.265448 8244 generic.go:334] "Generic (PLEG): container finished" 
podID="24b4ed170d527099878cb5fdd508a2fb" containerID="b1623083f7035de6d7b40e1f05f6d5b195a69fc0fe34d1a182012e756cac80ac" exitCode=2 Mar 18 10:04:49.402212 master-0 kubenswrapper[8244]: I0318 10:04:49.402127 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:49.402212 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:49.402212 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:49.402212 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:49.403387 master-0 kubenswrapper[8244]: I0318 10:04:49.402230 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:50.401758 master-0 kubenswrapper[8244]: I0318 10:04:50.401684 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:50.401758 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:50.401758 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:50.401758 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:50.402176 master-0 kubenswrapper[8244]: I0318 10:04:50.401791 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:51.400884 master-0 kubenswrapper[8244]: I0318 10:04:51.400801 8244 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:51.400884 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:51.400884 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:51.400884 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:51.401547 master-0 kubenswrapper[8244]: I0318 10:04:51.400906 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:52.401964 master-0 kubenswrapper[8244]: I0318 10:04:52.401779 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:52.401964 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:52.401964 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:52.401964 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:52.401964 master-0 kubenswrapper[8244]: I0318 10:04:52.401922 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:53.401873 master-0 kubenswrapper[8244]: I0318 10:04:53.401694 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:53.401873 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:53.401873 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:53.401873 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:53.401873 master-0 kubenswrapper[8244]: I0318 10:04:53.401795 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:54.402505 master-0 kubenswrapper[8244]: I0318 10:04:54.402397 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:54.402505 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:54.402505 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:54.402505 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:54.402505 master-0 kubenswrapper[8244]: I0318 10:04:54.402489 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:55.401954 master-0 kubenswrapper[8244]: I0318 10:04:55.401875 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:55.401954 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:55.401954 master-0 kubenswrapper[8244]: [+]process-running ok 
Mar 18 10:04:55.401954 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:55.402393 master-0 kubenswrapper[8244]: I0318 10:04:55.401973 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:56.402137 master-0 kubenswrapper[8244]: I0318 10:04:56.402062 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:56.402137 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:56.402137 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:56.402137 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:56.402762 master-0 kubenswrapper[8244]: I0318 10:04:56.402199 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:57.401707 master-0 kubenswrapper[8244]: I0318 10:04:57.401618 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:57.401707 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:57.401707 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:57.401707 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:57.402728 master-0 kubenswrapper[8244]: I0318 10:04:57.401731 8244 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:58.401937 master-0 kubenswrapper[8244]: I0318 10:04:58.401856 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:58.401937 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:58.401937 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:58.401937 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:58.402964 master-0 kubenswrapper[8244]: I0318 10:04:58.401952 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:59.155138 master-0 kubenswrapper[8244]: E0318 10:04:59.155034 8244 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24b4ed170d527099878cb5fdd508a2fb.slice/crio-conmon-b1623083f7035de6d7b40e1f05f6d5b195a69fc0fe34d1a182012e756cac80ac.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24b4ed170d527099878cb5fdd508a2fb.slice/crio-42a149c50fa78a2d332b3b42781dca70edc3a9ddbb27284d7e5aad00fc9537cf.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24b4ed170d527099878cb5fdd508a2fb.slice/crio-a1cc9c7d23364160318dce9d2e1458d63a625f367c87f4f12ac9c93662e4dc3c.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24b4ed170d527099878cb5fdd508a2fb.slice/crio-b1623083f7035de6d7b40e1f05f6d5b195a69fc0fe34d1a182012e756cac80ac.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24b4ed170d527099878cb5fdd508a2fb.slice/crio-conmon-a1cc9c7d23364160318dce9d2e1458d63a625f367c87f4f12ac9c93662e4dc3c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24b4ed170d527099878cb5fdd508a2fb.slice/crio-conmon-42a149c50fa78a2d332b3b42781dca70edc3a9ddbb27284d7e5aad00fc9537cf.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod2610d88e_f450_455a_9db5_dc59c1d97bf4.slice/crio-53abf8c46c4806e802330543e52578e77a18490ade1e7a702b54871790c5701b.scope\": RecentStats: unable to find data in memory cache]" Mar 18 10:04:59.155406 master-0 kubenswrapper[8244]: E0318 10:04:59.155034 8244 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24b4ed170d527099878cb5fdd508a2fb.slice/crio-conmon-b1623083f7035de6d7b40e1f05f6d5b195a69fc0fe34d1a182012e756cac80ac.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24b4ed170d527099878cb5fdd508a2fb.slice/crio-conmon-a1cc9c7d23364160318dce9d2e1458d63a625f367c87f4f12ac9c93662e4dc3c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24b4ed170d527099878cb5fdd508a2fb.slice/crio-42a149c50fa78a2d332b3b42781dca70edc3a9ddbb27284d7e5aad00fc9537cf.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24b4ed170d527099878cb5fdd508a2fb.slice/crio-conmon-42a149c50fa78a2d332b3b42781dca70edc3a9ddbb27284d7e5aad00fc9537cf.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24b4ed170d527099878cb5fdd508a2fb.slice/crio-b1623083f7035de6d7b40e1f05f6d5b195a69fc0fe34d1a182012e756cac80ac.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24b4ed170d527099878cb5fdd508a2fb.slice/crio-a1cc9c7d23364160318dce9d2e1458d63a625f367c87f4f12ac9c93662e4dc3c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod2610d88e_f450_455a_9db5_dc59c1d97bf4.slice/crio-conmon-53abf8c46c4806e802330543e52578e77a18490ade1e7a702b54871790c5701b.scope\": RecentStats: unable to find data in memory cache]" Mar 18 10:04:59.346044 master-0 kubenswrapper[8244]: I0318 10:04:59.345994 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_2610d88e-f450-455a-9db5-dc59c1d97bf4/installer/0.log" Mar 18 10:04:59.346044 master-0 kubenswrapper[8244]: I0318 10:04:59.346046 8244 generic.go:334] "Generic (PLEG): container finished" podID="2610d88e-f450-455a-9db5-dc59c1d97bf4" containerID="53abf8c46c4806e802330543e52578e77a18490ade1e7a702b54871790c5701b" exitCode=1 Mar 18 10:04:59.346361 master-0 kubenswrapper[8244]: I0318 10:04:59.346074 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"2610d88e-f450-455a-9db5-dc59c1d97bf4","Type":"ContainerDied","Data":"53abf8c46c4806e802330543e52578e77a18490ade1e7a702b54871790c5701b"} Mar 18 10:04:59.400922 master-0 kubenswrapper[8244]: I0318 10:04:59.400799 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP 
probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:04:59.400922 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:04:59.400922 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:04:59.400922 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:04:59.401338 master-0 kubenswrapper[8244]: I0318 10:04:59.400955 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:04:59.450817 master-0 kubenswrapper[8244]: I0318 10:04:59.450761 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_2610d88e-f450-455a-9db5-dc59c1d97bf4/installer/0.log" Mar 18 10:04:59.450817 master-0 kubenswrapper[8244]: I0318 10:04:59.450850 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 10:04:59.550595 master-0 kubenswrapper[8244]: I0318 10:04:59.550486 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2610d88e-f450-455a-9db5-dc59c1d97bf4-kubelet-dir\") pod \"2610d88e-f450-455a-9db5-dc59c1d97bf4\" (UID: \"2610d88e-f450-455a-9db5-dc59c1d97bf4\") " Mar 18 10:04:59.550595 master-0 kubenswrapper[8244]: I0318 10:04:59.550580 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2610d88e-f450-455a-9db5-dc59c1d97bf4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2610d88e-f450-455a-9db5-dc59c1d97bf4" (UID: "2610d88e-f450-455a-9db5-dc59c1d97bf4"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:04:59.550595 master-0 kubenswrapper[8244]: I0318 10:04:59.550591 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2610d88e-f450-455a-9db5-dc59c1d97bf4-kube-api-access\") pod \"2610d88e-f450-455a-9db5-dc59c1d97bf4\" (UID: \"2610d88e-f450-455a-9db5-dc59c1d97bf4\") " Mar 18 10:04:59.551142 master-0 kubenswrapper[8244]: I0318 10:04:59.550704 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2610d88e-f450-455a-9db5-dc59c1d97bf4-var-lock\") pod \"2610d88e-f450-455a-9db5-dc59c1d97bf4\" (UID: \"2610d88e-f450-455a-9db5-dc59c1d97bf4\") " Mar 18 10:04:59.551142 master-0 kubenswrapper[8244]: I0318 10:04:59.550912 8244 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2610d88e-f450-455a-9db5-dc59c1d97bf4-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 10:04:59.551142 master-0 kubenswrapper[8244]: I0318 10:04:59.550959 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2610d88e-f450-455a-9db5-dc59c1d97bf4-var-lock" (OuterVolumeSpecName: "var-lock") pod "2610d88e-f450-455a-9db5-dc59c1d97bf4" (UID: "2610d88e-f450-455a-9db5-dc59c1d97bf4"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:04:59.553749 master-0 kubenswrapper[8244]: I0318 10:04:59.553699 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2610d88e-f450-455a-9db5-dc59c1d97bf4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2610d88e-f450-455a-9db5-dc59c1d97bf4" (UID: "2610d88e-f450-455a-9db5-dc59c1d97bf4"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 10:04:59.651759 master-0 kubenswrapper[8244]: I0318 10:04:59.651624 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2610d88e-f450-455a-9db5-dc59c1d97bf4-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 10:04:59.651759 master-0 kubenswrapper[8244]: I0318 10:04:59.651664 8244 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2610d88e-f450-455a-9db5-dc59c1d97bf4-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 10:05:00.360135 master-0 kubenswrapper[8244]: I0318 10:05:00.360048 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_2610d88e-f450-455a-9db5-dc59c1d97bf4/installer/0.log" Mar 18 10:05:00.360437 master-0 kubenswrapper[8244]: I0318 10:05:00.360148 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"2610d88e-f450-455a-9db5-dc59c1d97bf4","Type":"ContainerDied","Data":"e3402d97d5cd0c562a44a7222ad82fa96c21a20426f7aec38a099e12bb0d5c81"} Mar 18 10:05:00.360437 master-0 kubenswrapper[8244]: I0318 10:05:00.360202 8244 scope.go:117] "RemoveContainer" containerID="53abf8c46c4806e802330543e52578e77a18490ade1e7a702b54871790c5701b" Mar 18 10:05:00.360437 master-0 kubenswrapper[8244]: I0318 10:05:00.360378 8244 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 10:05:00.401669 master-0 kubenswrapper[8244]: I0318 10:05:00.401598 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:05:00.401669 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:05:00.401669 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:05:00.401669 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:05:00.402212 master-0 kubenswrapper[8244]: I0318 10:05:00.401674 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:05:01.403950 master-0 kubenswrapper[8244]: I0318 10:05:01.403868 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:05:01.403950 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:05:01.403950 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:05:01.403950 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:05:01.404797 master-0 kubenswrapper[8244]: I0318 10:05:01.403963 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:05:02.356073 master-0 kubenswrapper[8244]: E0318 10:05:02.355947 8244 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 10:05:02.384867 master-0 kubenswrapper[8244]: I0318 10:05:02.384700 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_af8e875368eec13e995ea08015e08c42/kube-controller-manager/0.log" Mar 18 10:05:02.384867 master-0 kubenswrapper[8244]: I0318 10:05:02.384791 8244 generic.go:334] "Generic (PLEG): container finished" podID="af8e875368eec13e995ea08015e08c42" containerID="b5440fd92f867438da48c59f39988e512f02a0b7141abc1139ed7de105e95766" exitCode=1 Mar 18 10:05:02.385385 master-0 kubenswrapper[8244]: I0318 10:05:02.384886 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"af8e875368eec13e995ea08015e08c42","Type":"ContainerDied","Data":"b5440fd92f867438da48c59f39988e512f02a0b7141abc1139ed7de105e95766"} Mar 18 10:05:02.385819 master-0 kubenswrapper[8244]: I0318 10:05:02.385765 8244 scope.go:117] "RemoveContainer" containerID="b5440fd92f867438da48c59f39988e512f02a0b7141abc1139ed7de105e95766" Mar 18 10:05:02.402040 master-0 kubenswrapper[8244]: I0318 10:05:02.401984 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:05:02.402040 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:05:02.402040 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:05:02.402040 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:05:02.402040 master-0 kubenswrapper[8244]: I0318 10:05:02.402051 8244 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:05:02.812341 master-0 kubenswrapper[8244]: I0318 10:05:02.812272 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:05:02.812928 master-0 kubenswrapper[8244]: I0318 10:05:02.812352 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:05:02.812928 master-0 kubenswrapper[8244]: I0318 10:05:02.812373 8244 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:05:03.398175 master-0 kubenswrapper[8244]: I0318 10:05:03.398127 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_af8e875368eec13e995ea08015e08c42/kube-controller-manager/0.log" Mar 18 10:05:03.398540 master-0 kubenswrapper[8244]: I0318 10:05:03.398509 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"af8e875368eec13e995ea08015e08c42","Type":"ContainerStarted","Data":"fce78d10ab44ad6e3870abc2e19feeb6f5ae7acb96a08b13653663840e0cbb1b"} Mar 18 10:05:03.400533 master-0 kubenswrapper[8244]: I0318 10:05:03.400490 8244 generic.go:334] "Generic (PLEG): container finished" podID="87a8662e-66f1-4aee-9344-564bb4ac4f9a" containerID="9741863ef9844fe110fec368fe8e35a337bceb7feefcd7589421d83a4b33ff81" exitCode=0 Mar 18 10:05:03.400533 master-0 kubenswrapper[8244]: I0318 10:05:03.400528 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" 
event={"ID":"87a8662e-66f1-4aee-9344-564bb4ac4f9a","Type":"ContainerDied","Data":"9741863ef9844fe110fec368fe8e35a337bceb7feefcd7589421d83a4b33ff81"} Mar 18 10:05:03.401093 master-0 kubenswrapper[8244]: I0318 10:05:03.401053 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:05:03.401093 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:05:03.401093 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:05:03.401093 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:05:03.401387 master-0 kubenswrapper[8244]: I0318 10:05:03.401358 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:05:04.401593 master-0 kubenswrapper[8244]: I0318 10:05:04.401496 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:05:04.401593 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:05:04.401593 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:05:04.401593 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:05:04.402726 master-0 kubenswrapper[8244]: I0318 10:05:04.401601 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:05:04.776733 master-0 kubenswrapper[8244]: I0318 
10:05:04.776673 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 18 10:05:04.833269 master-0 kubenswrapper[8244]: I0318 10:05:04.833194 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/87a8662e-66f1-4aee-9344-564bb4ac4f9a-kubelet-dir\") pod \"87a8662e-66f1-4aee-9344-564bb4ac4f9a\" (UID: \"87a8662e-66f1-4aee-9344-564bb4ac4f9a\") " Mar 18 10:05:04.833487 master-0 kubenswrapper[8244]: I0318 10:05:04.833316 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a8662e-66f1-4aee-9344-564bb4ac4f9a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "87a8662e-66f1-4aee-9344-564bb4ac4f9a" (UID: "87a8662e-66f1-4aee-9344-564bb4ac4f9a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:05:04.833487 master-0 kubenswrapper[8244]: I0318 10:05:04.833355 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/87a8662e-66f1-4aee-9344-564bb4ac4f9a-var-lock\") pod \"87a8662e-66f1-4aee-9344-564bb4ac4f9a\" (UID: \"87a8662e-66f1-4aee-9344-564bb4ac4f9a\") " Mar 18 10:05:04.833487 master-0 kubenswrapper[8244]: I0318 10:05:04.833409 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87a8662e-66f1-4aee-9344-564bb4ac4f9a-kube-api-access\") pod \"87a8662e-66f1-4aee-9344-564bb4ac4f9a\" (UID: \"87a8662e-66f1-4aee-9344-564bb4ac4f9a\") " Mar 18 10:05:04.833487 master-0 kubenswrapper[8244]: I0318 10:05:04.833433 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a8662e-66f1-4aee-9344-564bb4ac4f9a-var-lock" (OuterVolumeSpecName: "var-lock") pod "87a8662e-66f1-4aee-9344-564bb4ac4f9a" (UID: 
"87a8662e-66f1-4aee-9344-564bb4ac4f9a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:05:04.833774 master-0 kubenswrapper[8244]: I0318 10:05:04.833754 8244 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/87a8662e-66f1-4aee-9344-564bb4ac4f9a-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 10:05:04.833880 master-0 kubenswrapper[8244]: I0318 10:05:04.833778 8244 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/87a8662e-66f1-4aee-9344-564bb4ac4f9a-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 10:05:04.837570 master-0 kubenswrapper[8244]: I0318 10:05:04.837513 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87a8662e-66f1-4aee-9344-564bb4ac4f9a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "87a8662e-66f1-4aee-9344-564bb4ac4f9a" (UID: "87a8662e-66f1-4aee-9344-564bb4ac4f9a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 10:05:04.934785 master-0 kubenswrapper[8244]: I0318 10:05:04.934698 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87a8662e-66f1-4aee-9344-564bb4ac4f9a-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 10:05:05.402482 master-0 kubenswrapper[8244]: I0318 10:05:05.402414 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:05:05.402482 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:05:05.402482 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:05:05.402482 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:05:05.403577 master-0 kubenswrapper[8244]: I0318 10:05:05.402503 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:05:05.420022 master-0 kubenswrapper[8244]: I0318 10:05:05.419933 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"87a8662e-66f1-4aee-9344-564bb4ac4f9a","Type":"ContainerDied","Data":"248be0eef87c6987bd3e5849d27bf7120297d80837bfe7be2b2148ea06921d34"} Mar 18 10:05:05.420022 master-0 kubenswrapper[8244]: I0318 10:05:05.420003 8244 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="248be0eef87c6987bd3e5849d27bf7120297d80837bfe7be2b2148ea06921d34" Mar 18 10:05:05.420289 master-0 kubenswrapper[8244]: I0318 10:05:05.420025 8244 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 18 10:05:06.402102 master-0 kubenswrapper[8244]: I0318 10:05:06.402030 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:05:06.402102 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:05:06.402102 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:05:06.402102 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:05:06.402102 master-0 kubenswrapper[8244]: I0318 10:05:06.402115 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:05:07.401730 master-0 kubenswrapper[8244]: I0318 10:05:07.401635 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:05:07.401730 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:05:07.401730 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:05:07.401730 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:05:07.402175 master-0 kubenswrapper[8244]: I0318 10:05:07.401743 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:05:08.401581 master-0 kubenswrapper[8244]: I0318 10:05:08.401487 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:05:08.401581 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:05:08.401581 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:05:08.401581 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:05:08.402255 master-0 kubenswrapper[8244]: I0318 10:05:08.401636 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:05:09.402874 master-0 kubenswrapper[8244]: I0318 10:05:09.402762 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:05:09.402874 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:05:09.402874 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:05:09.402874 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:05:09.404257 master-0 kubenswrapper[8244]: I0318 10:05:09.402937 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:05:10.401521 master-0 kubenswrapper[8244]: I0318 10:05:10.401409 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:05:10.401521 master-0 
kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:05:10.401521 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:05:10.401521 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:05:10.401521 master-0 kubenswrapper[8244]: I0318 10:05:10.401506 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:05:10.462376 master-0 kubenswrapper[8244]: I0318 10:05:10.462318 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-6-master-0_4ea5939e-5f4d-4028-9384-2ec5710ecdc8/installer/0.log" Mar 18 10:05:10.462842 master-0 kubenswrapper[8244]: I0318 10:05:10.462406 8244 generic.go:334] "Generic (PLEG): container finished" podID="4ea5939e-5f4d-4028-9384-2ec5710ecdc8" containerID="ee0f38924448efddd8bd62aa03fafbac2abe2ddc36be4b5eb348dac27bee7be4" exitCode=1 Mar 18 10:05:10.462842 master-0 kubenswrapper[8244]: I0318 10:05:10.462461 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" event={"ID":"4ea5939e-5f4d-4028-9384-2ec5710ecdc8","Type":"ContainerDied","Data":"ee0f38924448efddd8bd62aa03fafbac2abe2ddc36be4b5eb348dac27bee7be4"} Mar 18 10:05:11.402814 master-0 kubenswrapper[8244]: I0318 10:05:11.402752 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:05:11.402814 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:05:11.402814 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:05:11.402814 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:05:11.403415 master-0 kubenswrapper[8244]: I0318 
10:05:11.403366 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:05:11.826258 master-0 kubenswrapper[8244]: I0318 10:05:11.826218 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-6-master-0_4ea5939e-5f4d-4028-9384-2ec5710ecdc8/installer/0.log" Mar 18 10:05:11.826702 master-0 kubenswrapper[8244]: I0318 10:05:11.826298 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Mar 18 10:05:11.849102 master-0 kubenswrapper[8244]: I0318 10:05:11.849039 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ea5939e-5f4d-4028-9384-2ec5710ecdc8-kubelet-dir\") pod \"4ea5939e-5f4d-4028-9384-2ec5710ecdc8\" (UID: \"4ea5939e-5f4d-4028-9384-2ec5710ecdc8\") " Mar 18 10:05:11.849220 master-0 kubenswrapper[8244]: I0318 10:05:11.849183 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ea5939e-5f4d-4028-9384-2ec5710ecdc8-kube-api-access\") pod \"4ea5939e-5f4d-4028-9384-2ec5710ecdc8\" (UID: \"4ea5939e-5f4d-4028-9384-2ec5710ecdc8\") " Mar 18 10:05:11.849220 master-0 kubenswrapper[8244]: I0318 10:05:11.849180 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ea5939e-5f4d-4028-9384-2ec5710ecdc8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4ea5939e-5f4d-4028-9384-2ec5710ecdc8" (UID: "4ea5939e-5f4d-4028-9384-2ec5710ecdc8"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:05:11.849282 master-0 kubenswrapper[8244]: I0318 10:05:11.849242 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4ea5939e-5f4d-4028-9384-2ec5710ecdc8-var-lock\") pod \"4ea5939e-5f4d-4028-9384-2ec5710ecdc8\" (UID: \"4ea5939e-5f4d-4028-9384-2ec5710ecdc8\") " Mar 18 10:05:11.849526 master-0 kubenswrapper[8244]: I0318 10:05:11.849458 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ea5939e-5f4d-4028-9384-2ec5710ecdc8-var-lock" (OuterVolumeSpecName: "var-lock") pod "4ea5939e-5f4d-4028-9384-2ec5710ecdc8" (UID: "4ea5939e-5f4d-4028-9384-2ec5710ecdc8"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:05:11.849606 master-0 kubenswrapper[8244]: I0318 10:05:11.849587 8244 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ea5939e-5f4d-4028-9384-2ec5710ecdc8-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 10:05:11.854174 master-0 kubenswrapper[8244]: I0318 10:05:11.854119 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ea5939e-5f4d-4028-9384-2ec5710ecdc8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4ea5939e-5f4d-4028-9384-2ec5710ecdc8" (UID: "4ea5939e-5f4d-4028-9384-2ec5710ecdc8"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 10:05:11.951454 master-0 kubenswrapper[8244]: I0318 10:05:11.951360 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ea5939e-5f4d-4028-9384-2ec5710ecdc8-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 10:05:11.951454 master-0 kubenswrapper[8244]: I0318 10:05:11.951429 8244 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4ea5939e-5f4d-4028-9384-2ec5710ecdc8-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 10:05:12.356617 master-0 kubenswrapper[8244]: E0318 10:05:12.356510 8244 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 10:05:12.401641 master-0 kubenswrapper[8244]: I0318 10:05:12.401541 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:05:12.401641 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:05:12.401641 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:05:12.401641 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:05:12.402209 master-0 kubenswrapper[8244]: I0318 10:05:12.401659 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:05:12.484229 master-0 kubenswrapper[8244]: I0318 10:05:12.484165 8244 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-scheduler_installer-6-master-0_4ea5939e-5f4d-4028-9384-2ec5710ecdc8/installer/0.log" Mar 18 10:05:12.484513 master-0 kubenswrapper[8244]: I0318 10:05:12.484242 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" event={"ID":"4ea5939e-5f4d-4028-9384-2ec5710ecdc8","Type":"ContainerDied","Data":"823fdbbda6c3f662c8a7386983ae9bef843b30223cfc80549bf1fe24201c6148"} Mar 18 10:05:12.484513 master-0 kubenswrapper[8244]: I0318 10:05:12.484270 8244 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="823fdbbda6c3f662c8a7386983ae9bef843b30223cfc80549bf1fe24201c6148" Mar 18 10:05:12.484513 master-0 kubenswrapper[8244]: I0318 10:05:12.484354 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Mar 18 10:05:12.812862 master-0 kubenswrapper[8244]: I0318 10:05:12.812778 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:05:12.813223 master-0 kubenswrapper[8244]: I0318 10:05:12.813199 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:05:12.819848 master-0 kubenswrapper[8244]: I0318 10:05:12.819768 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:05:13.403057 master-0 kubenswrapper[8244]: I0318 10:05:13.402947 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:05:13.403057 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:05:13.403057 
master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:05:13.403057 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:05:13.404338 master-0 kubenswrapper[8244]: I0318 10:05:13.403066 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:05:13.498432 master-0 kubenswrapper[8244]: I0318 10:05:13.498342 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:05:14.403021 master-0 kubenswrapper[8244]: I0318 10:05:14.402765 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:05:14.403021 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:05:14.403021 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:05:14.403021 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:05:14.404172 master-0 kubenswrapper[8244]: I0318 10:05:14.403050 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:05:15.401841 master-0 kubenswrapper[8244]: I0318 10:05:15.401733 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:05:15.401841 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 
10:05:15.401841 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:05:15.401841 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:05:15.402283 master-0 kubenswrapper[8244]: I0318 10:05:15.401863 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:05:15.511057 master-0 kubenswrapper[8244]: I0318 10:05:15.510978 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-4-master-0_449dc8b3-72b7-4be5-b5ab-ed4d632f52b2/installer/0.log" Mar 18 10:05:15.511057 master-0 kubenswrapper[8244]: I0318 10:05:15.511032 8244 generic.go:334] "Generic (PLEG): container finished" podID="449dc8b3-72b7-4be5-b5ab-ed4d632f52b2" containerID="01fbb9d5ae86373a51c41f3a5e60d86ed2cd0a315f2ae635082fa660578bf765" exitCode=1 Mar 18 10:05:15.511993 master-0 kubenswrapper[8244]: I0318 10:05:15.511117 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"449dc8b3-72b7-4be5-b5ab-ed4d632f52b2","Type":"ContainerDied","Data":"01fbb9d5ae86373a51c41f3a5e60d86ed2cd0a315f2ae635082fa660578bf765"} Mar 18 10:05:16.402053 master-0 kubenswrapper[8244]: I0318 10:05:16.401943 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:05:16.402053 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:05:16.402053 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:05:16.402053 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:05:16.402416 master-0 kubenswrapper[8244]: I0318 10:05:16.402067 8244 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:05:16.879127 master-0 kubenswrapper[8244]: I0318 10:05:16.879065 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-4-master-0_449dc8b3-72b7-4be5-b5ab-ed4d632f52b2/installer/0.log" Mar 18 10:05:16.879127 master-0 kubenswrapper[8244]: I0318 10:05:16.879142 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 10:05:17.034707 master-0 kubenswrapper[8244]: I0318 10:05:17.034609 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/449dc8b3-72b7-4be5-b5ab-ed4d632f52b2-kubelet-dir\") pod \"449dc8b3-72b7-4be5-b5ab-ed4d632f52b2\" (UID: \"449dc8b3-72b7-4be5-b5ab-ed4d632f52b2\") " Mar 18 10:05:17.034707 master-0 kubenswrapper[8244]: I0318 10:05:17.034673 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/449dc8b3-72b7-4be5-b5ab-ed4d632f52b2-var-lock\") pod \"449dc8b3-72b7-4be5-b5ab-ed4d632f52b2\" (UID: \"449dc8b3-72b7-4be5-b5ab-ed4d632f52b2\") " Mar 18 10:05:17.035101 master-0 kubenswrapper[8244]: I0318 10:05:17.034748 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/449dc8b3-72b7-4be5-b5ab-ed4d632f52b2-kube-api-access\") pod \"449dc8b3-72b7-4be5-b5ab-ed4d632f52b2\" (UID: \"449dc8b3-72b7-4be5-b5ab-ed4d632f52b2\") " Mar 18 10:05:17.035101 master-0 kubenswrapper[8244]: I0318 10:05:17.034849 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/449dc8b3-72b7-4be5-b5ab-ed4d632f52b2-kubelet-dir" 
(OuterVolumeSpecName: "kubelet-dir") pod "449dc8b3-72b7-4be5-b5ab-ed4d632f52b2" (UID: "449dc8b3-72b7-4be5-b5ab-ed4d632f52b2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:05:17.035101 master-0 kubenswrapper[8244]: I0318 10:05:17.034896 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/449dc8b3-72b7-4be5-b5ab-ed4d632f52b2-var-lock" (OuterVolumeSpecName: "var-lock") pod "449dc8b3-72b7-4be5-b5ab-ed4d632f52b2" (UID: "449dc8b3-72b7-4be5-b5ab-ed4d632f52b2"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:05:17.035609 master-0 kubenswrapper[8244]: I0318 10:05:17.035554 8244 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/449dc8b3-72b7-4be5-b5ab-ed4d632f52b2-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 10:05:17.035609 master-0 kubenswrapper[8244]: I0318 10:05:17.035591 8244 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/449dc8b3-72b7-4be5-b5ab-ed4d632f52b2-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 10:05:17.038028 master-0 kubenswrapper[8244]: I0318 10:05:17.037966 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/449dc8b3-72b7-4be5-b5ab-ed4d632f52b2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "449dc8b3-72b7-4be5-b5ab-ed4d632f52b2" (UID: "449dc8b3-72b7-4be5-b5ab-ed4d632f52b2"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 10:05:17.137018 master-0 kubenswrapper[8244]: I0318 10:05:17.136806 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/449dc8b3-72b7-4be5-b5ab-ed4d632f52b2-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 10:05:17.402009 master-0 kubenswrapper[8244]: I0318 10:05:17.401749 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:05:17.402009 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:05:17.402009 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:05:17.402009 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:05:17.402009 master-0 kubenswrapper[8244]: I0318 10:05:17.401972 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:05:17.530474 master-0 kubenswrapper[8244]: I0318 10:05:17.530399 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-4-master-0_449dc8b3-72b7-4be5-b5ab-ed4d632f52b2/installer/0.log" Mar 18 10:05:17.530783 master-0 kubenswrapper[8244]: I0318 10:05:17.530508 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"449dc8b3-72b7-4be5-b5ab-ed4d632f52b2","Type":"ContainerDied","Data":"ab6781799773a4bd269941acef201c1236103b10079655748dd8db69e5953242"} Mar 18 10:05:17.530783 master-0 kubenswrapper[8244]: I0318 10:05:17.530554 8244 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="ab6781799773a4bd269941acef201c1236103b10079655748dd8db69e5953242" Mar 18 10:05:17.530783 master-0 kubenswrapper[8244]: I0318 10:05:17.530618 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 10:05:18.402513 master-0 kubenswrapper[8244]: I0318 10:05:18.402443 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:05:18.402513 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:05:18.402513 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:05:18.402513 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:05:18.403228 master-0 kubenswrapper[8244]: I0318 10:05:18.402542 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:05:18.468501 master-0 kubenswrapper[8244]: I0318 10:05:18.468464 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log" Mar 18 10:05:18.469946 master-0 kubenswrapper[8244]: I0318 10:05:18.469921 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log" Mar 18 10:05:18.470924 master-0 kubenswrapper[8244]: I0318 10:05:18.470813 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd/0.log" Mar 18 10:05:18.471392 master-0 kubenswrapper[8244]: I0318 10:05:18.471375 8244 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcdctl/0.log" Mar 18 10:05:18.472987 master-0 kubenswrapper[8244]: I0318 10:05:18.472795 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 18 10:05:18.541287 master-0 kubenswrapper[8244]: I0318 10:05:18.541251 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log" Mar 18 10:05:18.542858 master-0 kubenswrapper[8244]: I0318 10:05:18.542817 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log" Mar 18 10:05:18.543767 master-0 kubenswrapper[8244]: I0318 10:05:18.543752 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd/0.log" Mar 18 10:05:18.544789 master-0 kubenswrapper[8244]: I0318 10:05:18.544734 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcdctl/0.log" Mar 18 10:05:18.546417 master-0 kubenswrapper[8244]: I0318 10:05:18.546365 8244 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="8499a2d7c7a3d0807ff2dbfdf75fb879a647f69e89c6a7abf9768989efb505e2" exitCode=137 Mar 18 10:05:18.546563 master-0 kubenswrapper[8244]: I0318 10:05:18.546524 8244 scope.go:117] "RemoveContainer" containerID="42a149c50fa78a2d332b3b42781dca70edc3a9ddbb27284d7e5aad00fc9537cf" Mar 18 10:05:18.546673 master-0 kubenswrapper[8244]: I0318 10:05:18.546529 8244 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 18 10:05:18.546781 master-0 kubenswrapper[8244]: I0318 10:05:18.546545 8244 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="86bd68e4c693d142d1db15cea87ea6f7837f758701127f7c99332e838f73bebb" exitCode=137 Mar 18 10:05:18.566966 master-0 kubenswrapper[8244]: I0318 10:05:18.566510 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 18 10:05:18.566966 master-0 kubenswrapper[8244]: I0318 10:05:18.566584 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 18 10:05:18.566966 master-0 kubenswrapper[8244]: I0318 10:05:18.566619 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 18 10:05:18.566966 master-0 kubenswrapper[8244]: I0318 10:05:18.566675 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 18 10:05:18.566966 master-0 kubenswrapper[8244]: I0318 10:05:18.566697 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod 
\"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 18 10:05:18.566966 master-0 kubenswrapper[8244]: I0318 10:05:18.566672 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "static-pod-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:05:18.566966 master-0 kubenswrapper[8244]: I0318 10:05:18.566701 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:05:18.566966 master-0 kubenswrapper[8244]: I0318 10:05:18.566741 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 18 10:05:18.566966 master-0 kubenswrapper[8244]: I0318 10:05:18.566760 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir" (OuterVolumeSpecName: "log-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "log-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:05:18.566966 master-0 kubenswrapper[8244]: I0318 10:05:18.566759 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "usr-local-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:05:18.566966 master-0 kubenswrapper[8244]: I0318 10:05:18.566778 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir" (OuterVolumeSpecName: "data-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:05:18.566966 master-0 kubenswrapper[8244]: I0318 10:05:18.566866 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:05:18.567877 master-0 kubenswrapper[8244]: I0318 10:05:18.567041 8244 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 10:05:18.567877 master-0 kubenswrapper[8244]: I0318 10:05:18.567066 8244 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") on node \"master-0\" DevicePath \"\"" Mar 18 10:05:18.567877 master-0 kubenswrapper[8244]: I0318 10:05:18.567086 8244 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 10:05:18.567877 master-0 kubenswrapper[8244]: I0318 10:05:18.567106 8244 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 10:05:18.567877 master-0 kubenswrapper[8244]: I0318 10:05:18.567122 8244 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 10:05:18.567877 master-0 kubenswrapper[8244]: I0318 10:05:18.567140 8244 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 10:05:18.572742 master-0 kubenswrapper[8244]: I0318 10:05:18.572682 8244 scope.go:117] "RemoveContainer" containerID="a1cc9c7d23364160318dce9d2e1458d63a625f367c87f4f12ac9c93662e4dc3c" Mar 18 10:05:18.600808 master-0 kubenswrapper[8244]: I0318 10:05:18.597739 8244 scope.go:117] 
"RemoveContainer" containerID="b1623083f7035de6d7b40e1f05f6d5b195a69fc0fe34d1a182012e756cac80ac" Mar 18 10:05:18.625801 master-0 kubenswrapper[8244]: I0318 10:05:18.625723 8244 scope.go:117] "RemoveContainer" containerID="8499a2d7c7a3d0807ff2dbfdf75fb879a647f69e89c6a7abf9768989efb505e2" Mar 18 10:05:18.649676 master-0 kubenswrapper[8244]: I0318 10:05:18.649595 8244 scope.go:117] "RemoveContainer" containerID="86bd68e4c693d142d1db15cea87ea6f7837f758701127f7c99332e838f73bebb" Mar 18 10:05:18.670163 master-0 kubenswrapper[8244]: I0318 10:05:18.670077 8244 scope.go:117] "RemoveContainer" containerID="29ee184cf90562745faf61ae9cbea267c5bcca4ff035e200a79195304cec3e37" Mar 18 10:05:18.694117 master-0 kubenswrapper[8244]: I0318 10:05:18.694048 8244 scope.go:117] "RemoveContainer" containerID="c7b162f5cf656f36d03dfa1037f5de6d89cb9f49ed2898c8a6cb0def4574b717" Mar 18 10:05:18.718335 master-0 kubenswrapper[8244]: I0318 10:05:18.718240 8244 scope.go:117] "RemoveContainer" containerID="b649a1b6836488922085b0481b7f8dd7a8d3e925f43a8682136bc4cca08f8220" Mar 18 10:05:18.745340 master-0 kubenswrapper[8244]: I0318 10:05:18.745275 8244 scope.go:117] "RemoveContainer" containerID="42a149c50fa78a2d332b3b42781dca70edc3a9ddbb27284d7e5aad00fc9537cf" Mar 18 10:05:18.746327 master-0 kubenswrapper[8244]: E0318 10:05:18.746250 8244 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42a149c50fa78a2d332b3b42781dca70edc3a9ddbb27284d7e5aad00fc9537cf\": container with ID starting with 42a149c50fa78a2d332b3b42781dca70edc3a9ddbb27284d7e5aad00fc9537cf not found: ID does not exist" containerID="42a149c50fa78a2d332b3b42781dca70edc3a9ddbb27284d7e5aad00fc9537cf" Mar 18 10:05:18.746327 master-0 kubenswrapper[8244]: I0318 10:05:18.746310 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42a149c50fa78a2d332b3b42781dca70edc3a9ddbb27284d7e5aad00fc9537cf"} err="failed to get container status 
\"42a149c50fa78a2d332b3b42781dca70edc3a9ddbb27284d7e5aad00fc9537cf\": rpc error: code = NotFound desc = could not find container \"42a149c50fa78a2d332b3b42781dca70edc3a9ddbb27284d7e5aad00fc9537cf\": container with ID starting with 42a149c50fa78a2d332b3b42781dca70edc3a9ddbb27284d7e5aad00fc9537cf not found: ID does not exist" Mar 18 10:05:18.746555 master-0 kubenswrapper[8244]: I0318 10:05:18.746351 8244 scope.go:117] "RemoveContainer" containerID="a1cc9c7d23364160318dce9d2e1458d63a625f367c87f4f12ac9c93662e4dc3c" Mar 18 10:05:18.746881 master-0 kubenswrapper[8244]: E0318 10:05:18.746811 8244 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1cc9c7d23364160318dce9d2e1458d63a625f367c87f4f12ac9c93662e4dc3c\": container with ID starting with a1cc9c7d23364160318dce9d2e1458d63a625f367c87f4f12ac9c93662e4dc3c not found: ID does not exist" containerID="a1cc9c7d23364160318dce9d2e1458d63a625f367c87f4f12ac9c93662e4dc3c" Mar 18 10:05:18.747004 master-0 kubenswrapper[8244]: I0318 10:05:18.746886 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1cc9c7d23364160318dce9d2e1458d63a625f367c87f4f12ac9c93662e4dc3c"} err="failed to get container status \"a1cc9c7d23364160318dce9d2e1458d63a625f367c87f4f12ac9c93662e4dc3c\": rpc error: code = NotFound desc = could not find container \"a1cc9c7d23364160318dce9d2e1458d63a625f367c87f4f12ac9c93662e4dc3c\": container with ID starting with a1cc9c7d23364160318dce9d2e1458d63a625f367c87f4f12ac9c93662e4dc3c not found: ID does not exist" Mar 18 10:05:18.747004 master-0 kubenswrapper[8244]: I0318 10:05:18.746913 8244 scope.go:117] "RemoveContainer" containerID="b1623083f7035de6d7b40e1f05f6d5b195a69fc0fe34d1a182012e756cac80ac" Mar 18 10:05:18.747545 master-0 kubenswrapper[8244]: E0318 10:05:18.747473 8244 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b1623083f7035de6d7b40e1f05f6d5b195a69fc0fe34d1a182012e756cac80ac\": container with ID starting with b1623083f7035de6d7b40e1f05f6d5b195a69fc0fe34d1a182012e756cac80ac not found: ID does not exist" containerID="b1623083f7035de6d7b40e1f05f6d5b195a69fc0fe34d1a182012e756cac80ac" Mar 18 10:05:18.747685 master-0 kubenswrapper[8244]: I0318 10:05:18.747535 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1623083f7035de6d7b40e1f05f6d5b195a69fc0fe34d1a182012e756cac80ac"} err="failed to get container status \"b1623083f7035de6d7b40e1f05f6d5b195a69fc0fe34d1a182012e756cac80ac\": rpc error: code = NotFound desc = could not find container \"b1623083f7035de6d7b40e1f05f6d5b195a69fc0fe34d1a182012e756cac80ac\": container with ID starting with b1623083f7035de6d7b40e1f05f6d5b195a69fc0fe34d1a182012e756cac80ac not found: ID does not exist" Mar 18 10:05:18.747685 master-0 kubenswrapper[8244]: I0318 10:05:18.747600 8244 scope.go:117] "RemoveContainer" containerID="8499a2d7c7a3d0807ff2dbfdf75fb879a647f69e89c6a7abf9768989efb505e2" Mar 18 10:05:18.748169 master-0 kubenswrapper[8244]: E0318 10:05:18.748113 8244 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8499a2d7c7a3d0807ff2dbfdf75fb879a647f69e89c6a7abf9768989efb505e2\": container with ID starting with 8499a2d7c7a3d0807ff2dbfdf75fb879a647f69e89c6a7abf9768989efb505e2 not found: ID does not exist" containerID="8499a2d7c7a3d0807ff2dbfdf75fb879a647f69e89c6a7abf9768989efb505e2" Mar 18 10:05:18.748169 master-0 kubenswrapper[8244]: I0318 10:05:18.748156 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8499a2d7c7a3d0807ff2dbfdf75fb879a647f69e89c6a7abf9768989efb505e2"} err="failed to get container status \"8499a2d7c7a3d0807ff2dbfdf75fb879a647f69e89c6a7abf9768989efb505e2\": rpc error: code = NotFound desc = could not find container 
\"8499a2d7c7a3d0807ff2dbfdf75fb879a647f69e89c6a7abf9768989efb505e2\": container with ID starting with 8499a2d7c7a3d0807ff2dbfdf75fb879a647f69e89c6a7abf9768989efb505e2 not found: ID does not exist" Mar 18 10:05:18.748475 master-0 kubenswrapper[8244]: I0318 10:05:18.748182 8244 scope.go:117] "RemoveContainer" containerID="86bd68e4c693d142d1db15cea87ea6f7837f758701127f7c99332e838f73bebb" Mar 18 10:05:18.749076 master-0 kubenswrapper[8244]: E0318 10:05:18.749009 8244 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86bd68e4c693d142d1db15cea87ea6f7837f758701127f7c99332e838f73bebb\": container with ID starting with 86bd68e4c693d142d1db15cea87ea6f7837f758701127f7c99332e838f73bebb not found: ID does not exist" containerID="86bd68e4c693d142d1db15cea87ea6f7837f758701127f7c99332e838f73bebb" Mar 18 10:05:18.749076 master-0 kubenswrapper[8244]: I0318 10:05:18.749051 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86bd68e4c693d142d1db15cea87ea6f7837f758701127f7c99332e838f73bebb"} err="failed to get container status \"86bd68e4c693d142d1db15cea87ea6f7837f758701127f7c99332e838f73bebb\": rpc error: code = NotFound desc = could not find container \"86bd68e4c693d142d1db15cea87ea6f7837f758701127f7c99332e838f73bebb\": container with ID starting with 86bd68e4c693d142d1db15cea87ea6f7837f758701127f7c99332e838f73bebb not found: ID does not exist" Mar 18 10:05:18.749076 master-0 kubenswrapper[8244]: I0318 10:05:18.749075 8244 scope.go:117] "RemoveContainer" containerID="29ee184cf90562745faf61ae9cbea267c5bcca4ff035e200a79195304cec3e37" Mar 18 10:05:18.750980 master-0 kubenswrapper[8244]: E0318 10:05:18.750909 8244 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29ee184cf90562745faf61ae9cbea267c5bcca4ff035e200a79195304cec3e37\": container with ID starting with 
29ee184cf90562745faf61ae9cbea267c5bcca4ff035e200a79195304cec3e37 not found: ID does not exist" containerID="29ee184cf90562745faf61ae9cbea267c5bcca4ff035e200a79195304cec3e37" Mar 18 10:05:18.750980 master-0 kubenswrapper[8244]: I0318 10:05:18.750956 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29ee184cf90562745faf61ae9cbea267c5bcca4ff035e200a79195304cec3e37"} err="failed to get container status \"29ee184cf90562745faf61ae9cbea267c5bcca4ff035e200a79195304cec3e37\": rpc error: code = NotFound desc = could not find container \"29ee184cf90562745faf61ae9cbea267c5bcca4ff035e200a79195304cec3e37\": container with ID starting with 29ee184cf90562745faf61ae9cbea267c5bcca4ff035e200a79195304cec3e37 not found: ID does not exist" Mar 18 10:05:18.750980 master-0 kubenswrapper[8244]: I0318 10:05:18.750985 8244 scope.go:117] "RemoveContainer" containerID="c7b162f5cf656f36d03dfa1037f5de6d89cb9f49ed2898c8a6cb0def4574b717" Mar 18 10:05:18.751545 master-0 kubenswrapper[8244]: E0318 10:05:18.751477 8244 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7b162f5cf656f36d03dfa1037f5de6d89cb9f49ed2898c8a6cb0def4574b717\": container with ID starting with c7b162f5cf656f36d03dfa1037f5de6d89cb9f49ed2898c8a6cb0def4574b717 not found: ID does not exist" containerID="c7b162f5cf656f36d03dfa1037f5de6d89cb9f49ed2898c8a6cb0def4574b717" Mar 18 10:05:18.751545 master-0 kubenswrapper[8244]: I0318 10:05:18.751520 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7b162f5cf656f36d03dfa1037f5de6d89cb9f49ed2898c8a6cb0def4574b717"} err="failed to get container status \"c7b162f5cf656f36d03dfa1037f5de6d89cb9f49ed2898c8a6cb0def4574b717\": rpc error: code = NotFound desc = could not find container \"c7b162f5cf656f36d03dfa1037f5de6d89cb9f49ed2898c8a6cb0def4574b717\": container with ID starting with 
c7b162f5cf656f36d03dfa1037f5de6d89cb9f49ed2898c8a6cb0def4574b717 not found: ID does not exist" Mar 18 10:05:18.751545 master-0 kubenswrapper[8244]: I0318 10:05:18.751549 8244 scope.go:117] "RemoveContainer" containerID="b649a1b6836488922085b0481b7f8dd7a8d3e925f43a8682136bc4cca08f8220" Mar 18 10:05:18.752100 master-0 kubenswrapper[8244]: E0318 10:05:18.752032 8244 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b649a1b6836488922085b0481b7f8dd7a8d3e925f43a8682136bc4cca08f8220\": container with ID starting with b649a1b6836488922085b0481b7f8dd7a8d3e925f43a8682136bc4cca08f8220 not found: ID does not exist" containerID="b649a1b6836488922085b0481b7f8dd7a8d3e925f43a8682136bc4cca08f8220" Mar 18 10:05:18.752100 master-0 kubenswrapper[8244]: I0318 10:05:18.752072 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b649a1b6836488922085b0481b7f8dd7a8d3e925f43a8682136bc4cca08f8220"} err="failed to get container status \"b649a1b6836488922085b0481b7f8dd7a8d3e925f43a8682136bc4cca08f8220\": rpc error: code = NotFound desc = could not find container \"b649a1b6836488922085b0481b7f8dd7a8d3e925f43a8682136bc4cca08f8220\": container with ID starting with b649a1b6836488922085b0481b7f8dd7a8d3e925f43a8682136bc4cca08f8220 not found: ID does not exist" Mar 18 10:05:18.752100 master-0 kubenswrapper[8244]: I0318 10:05:18.752098 8244 scope.go:117] "RemoveContainer" containerID="42a149c50fa78a2d332b3b42781dca70edc3a9ddbb27284d7e5aad00fc9537cf" Mar 18 10:05:18.752643 master-0 kubenswrapper[8244]: I0318 10:05:18.752586 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42a149c50fa78a2d332b3b42781dca70edc3a9ddbb27284d7e5aad00fc9537cf"} err="failed to get container status \"42a149c50fa78a2d332b3b42781dca70edc3a9ddbb27284d7e5aad00fc9537cf\": rpc error: code = NotFound desc = could not find container 
\"42a149c50fa78a2d332b3b42781dca70edc3a9ddbb27284d7e5aad00fc9537cf\": container with ID starting with 42a149c50fa78a2d332b3b42781dca70edc3a9ddbb27284d7e5aad00fc9537cf not found: ID does not exist" Mar 18 10:05:18.752643 master-0 kubenswrapper[8244]: I0318 10:05:18.752624 8244 scope.go:117] "RemoveContainer" containerID="a1cc9c7d23364160318dce9d2e1458d63a625f367c87f4f12ac9c93662e4dc3c" Mar 18 10:05:18.753207 master-0 kubenswrapper[8244]: I0318 10:05:18.753164 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1cc9c7d23364160318dce9d2e1458d63a625f367c87f4f12ac9c93662e4dc3c"} err="failed to get container status \"a1cc9c7d23364160318dce9d2e1458d63a625f367c87f4f12ac9c93662e4dc3c\": rpc error: code = NotFound desc = could not find container \"a1cc9c7d23364160318dce9d2e1458d63a625f367c87f4f12ac9c93662e4dc3c\": container with ID starting with a1cc9c7d23364160318dce9d2e1458d63a625f367c87f4f12ac9c93662e4dc3c not found: ID does not exist" Mar 18 10:05:18.753207 master-0 kubenswrapper[8244]: I0318 10:05:18.753200 8244 scope.go:117] "RemoveContainer" containerID="b1623083f7035de6d7b40e1f05f6d5b195a69fc0fe34d1a182012e756cac80ac" Mar 18 10:05:18.753621 master-0 kubenswrapper[8244]: I0318 10:05:18.753579 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1623083f7035de6d7b40e1f05f6d5b195a69fc0fe34d1a182012e756cac80ac"} err="failed to get container status \"b1623083f7035de6d7b40e1f05f6d5b195a69fc0fe34d1a182012e756cac80ac\": rpc error: code = NotFound desc = could not find container \"b1623083f7035de6d7b40e1f05f6d5b195a69fc0fe34d1a182012e756cac80ac\": container with ID starting with b1623083f7035de6d7b40e1f05f6d5b195a69fc0fe34d1a182012e756cac80ac not found: ID does not exist" Mar 18 10:05:18.753621 master-0 kubenswrapper[8244]: I0318 10:05:18.753619 8244 scope.go:117] "RemoveContainer" containerID="8499a2d7c7a3d0807ff2dbfdf75fb879a647f69e89c6a7abf9768989efb505e2" Mar 18 
10:05:18.754012 master-0 kubenswrapper[8244]: I0318 10:05:18.753984 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8499a2d7c7a3d0807ff2dbfdf75fb879a647f69e89c6a7abf9768989efb505e2"} err="failed to get container status \"8499a2d7c7a3d0807ff2dbfdf75fb879a647f69e89c6a7abf9768989efb505e2\": rpc error: code = NotFound desc = could not find container \"8499a2d7c7a3d0807ff2dbfdf75fb879a647f69e89c6a7abf9768989efb505e2\": container with ID starting with 8499a2d7c7a3d0807ff2dbfdf75fb879a647f69e89c6a7abf9768989efb505e2 not found: ID does not exist" Mar 18 10:05:18.754012 master-0 kubenswrapper[8244]: I0318 10:05:18.754012 8244 scope.go:117] "RemoveContainer" containerID="86bd68e4c693d142d1db15cea87ea6f7837f758701127f7c99332e838f73bebb" Mar 18 10:05:18.754336 master-0 kubenswrapper[8244]: I0318 10:05:18.754296 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86bd68e4c693d142d1db15cea87ea6f7837f758701127f7c99332e838f73bebb"} err="failed to get container status \"86bd68e4c693d142d1db15cea87ea6f7837f758701127f7c99332e838f73bebb\": rpc error: code = NotFound desc = could not find container \"86bd68e4c693d142d1db15cea87ea6f7837f758701127f7c99332e838f73bebb\": container with ID starting with 86bd68e4c693d142d1db15cea87ea6f7837f758701127f7c99332e838f73bebb not found: ID does not exist" Mar 18 10:05:18.754336 master-0 kubenswrapper[8244]: I0318 10:05:18.754328 8244 scope.go:117] "RemoveContainer" containerID="29ee184cf90562745faf61ae9cbea267c5bcca4ff035e200a79195304cec3e37" Mar 18 10:05:18.755022 master-0 kubenswrapper[8244]: I0318 10:05:18.754958 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29ee184cf90562745faf61ae9cbea267c5bcca4ff035e200a79195304cec3e37"} err="failed to get container status \"29ee184cf90562745faf61ae9cbea267c5bcca4ff035e200a79195304cec3e37\": rpc error: code = NotFound desc = could not find container 
\"29ee184cf90562745faf61ae9cbea267c5bcca4ff035e200a79195304cec3e37\": container with ID starting with 29ee184cf90562745faf61ae9cbea267c5bcca4ff035e200a79195304cec3e37 not found: ID does not exist" Mar 18 10:05:18.755022 master-0 kubenswrapper[8244]: I0318 10:05:18.754993 8244 scope.go:117] "RemoveContainer" containerID="c7b162f5cf656f36d03dfa1037f5de6d89cb9f49ed2898c8a6cb0def4574b717" Mar 18 10:05:18.755325 master-0 kubenswrapper[8244]: I0318 10:05:18.755272 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7b162f5cf656f36d03dfa1037f5de6d89cb9f49ed2898c8a6cb0def4574b717"} err="failed to get container status \"c7b162f5cf656f36d03dfa1037f5de6d89cb9f49ed2898c8a6cb0def4574b717\": rpc error: code = NotFound desc = could not find container \"c7b162f5cf656f36d03dfa1037f5de6d89cb9f49ed2898c8a6cb0def4574b717\": container with ID starting with c7b162f5cf656f36d03dfa1037f5de6d89cb9f49ed2898c8a6cb0def4574b717 not found: ID does not exist" Mar 18 10:05:18.755325 master-0 kubenswrapper[8244]: I0318 10:05:18.755304 8244 scope.go:117] "RemoveContainer" containerID="b649a1b6836488922085b0481b7f8dd7a8d3e925f43a8682136bc4cca08f8220" Mar 18 10:05:18.755762 master-0 kubenswrapper[8244]: I0318 10:05:18.755727 8244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b649a1b6836488922085b0481b7f8dd7a8d3e925f43a8682136bc4cca08f8220"} err="failed to get container status \"b649a1b6836488922085b0481b7f8dd7a8d3e925f43a8682136bc4cca08f8220\": rpc error: code = NotFound desc = could not find container \"b649a1b6836488922085b0481b7f8dd7a8d3e925f43a8682136bc4cca08f8220\": container with ID starting with b649a1b6836488922085b0481b7f8dd7a8d3e925f43a8682136bc4cca08f8220 not found: ID does not exist" Mar 18 10:05:19.401664 master-0 kubenswrapper[8244]: I0318 10:05:19.401586 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:05:19.401664 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:05:19.401664 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:05:19.401664 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:05:19.402222 master-0 kubenswrapper[8244]: I0318 10:05:19.401687 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:05:19.747297 master-0 kubenswrapper[8244]: I0318 10:05:19.747137 8244 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24b4ed170d527099878cb5fdd508a2fb" path="/var/lib/kubelet/pods/24b4ed170d527099878cb5fdd508a2fb/volumes" Mar 18 10:05:20.402192 master-0 kubenswrapper[8244]: I0318 10:05:20.402106 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:05:20.402192 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:05:20.402192 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:05:20.402192 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:05:20.402192 master-0 kubenswrapper[8244]: I0318 10:05:20.402197 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:05:21.402061 master-0 kubenswrapper[8244]: I0318 10:05:21.401965 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:21.402061 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:21.402061 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:21.402061 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:21.402712 master-0 kubenswrapper[8244]: I0318 10:05:21.402084 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:22.337038 master-0 kubenswrapper[8244]: E0318 10:05:22.336804 8244 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.189de76e18c3dde4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:24b4ed170d527099878cb5fdd508a2fb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Killing,Message:Stopping container etcd-rev,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 10:04:48.30175178 +0000 UTC m=+604.781487938,LastTimestamp:2026-03-18 10:04:48.30175178 +0000 UTC m=+604.781487938,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 10:05:22.357050 master-0 kubenswrapper[8244]: E0318 10:05:22.356968 8244 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded"
Mar 18 10:05:22.402354 master-0 kubenswrapper[8244]: I0318 10:05:22.402258 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:22.402354 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:22.402354 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:22.402354 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:22.403369 master-0 kubenswrapper[8244]: I0318 10:05:22.402356 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:23.401590 master-0 kubenswrapper[8244]: I0318 10:05:23.401506 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:23.401590 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:23.401590 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:23.401590 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:23.402034 master-0 kubenswrapper[8244]: I0318 10:05:23.401606 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:24.401979 master-0 kubenswrapper[8244]: I0318 10:05:24.401873 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:24.401979 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:24.401979 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:24.401979 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:24.401979 master-0 kubenswrapper[8244]: I0318 10:05:24.401977 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:25.401860 master-0 kubenswrapper[8244]: I0318 10:05:25.401738 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:25.401860 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:25.401860 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:25.401860 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:25.401860 master-0 kubenswrapper[8244]: I0318 10:05:25.401809 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:26.401877 master-0 kubenswrapper[8244]: I0318 10:05:26.401776 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:26.401877 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:26.401877 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:26.401877 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:26.402675 master-0 kubenswrapper[8244]: I0318 10:05:26.401928 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:27.401900 master-0 kubenswrapper[8244]: I0318 10:05:27.401771 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:27.401900 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:27.401900 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:27.401900 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:27.401900 master-0 kubenswrapper[8244]: I0318 10:05:27.401856 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:27.732902 master-0 kubenswrapper[8244]: I0318 10:05:27.732489 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 18 10:05:27.760321 master-0 kubenswrapper[8244]: I0318 10:05:27.760209 8244 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="9d397d3f-2d40-469c-b5a4-f278c504932a"
Mar 18 10:05:27.760321 master-0 kubenswrapper[8244]: I0318 10:05:27.760270 8244 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="9d397d3f-2d40-469c-b5a4-f278c504932a"
Mar 18 10:05:28.401426 master-0 kubenswrapper[8244]: I0318 10:05:28.401363 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:28.401426 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:28.401426 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:28.401426 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:28.401426 master-0 kubenswrapper[8244]: I0318 10:05:28.401421 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:29.401930 master-0 kubenswrapper[8244]: I0318 10:05:29.401804 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:29.401930 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:29.401930 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:29.401930 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:29.402870 master-0 kubenswrapper[8244]: I0318 10:05:29.401935 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:30.401214 master-0 kubenswrapper[8244]: I0318 10:05:30.401159 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:30.401214 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:30.401214 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:30.401214 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:30.401854 master-0 kubenswrapper[8244]: I0318 10:05:30.401683 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:31.401681 master-0 kubenswrapper[8244]: I0318 10:05:31.401462 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:31.401681 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:31.401681 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:31.401681 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:31.401681 master-0 kubenswrapper[8244]: I0318 10:05:31.401597 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:32.358367 master-0 kubenswrapper[8244]: E0318 10:05:32.358263 8244 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 10:05:32.402346 master-0 kubenswrapper[8244]: I0318 10:05:32.402266 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:32.402346 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:32.402346 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:32.402346 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:32.403263 master-0 kubenswrapper[8244]: I0318 10:05:32.402355 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:33.402163 master-0 kubenswrapper[8244]: I0318 10:05:33.402054 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:33.402163 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:33.402163 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:33.402163 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:33.403386 master-0 kubenswrapper[8244]: I0318 10:05:33.402196 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:34.401452 master-0 kubenswrapper[8244]: I0318 10:05:34.401384 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:34.401452 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:34.401452 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:34.401452 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:34.402030 master-0 kubenswrapper[8244]: I0318 10:05:34.401988 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:35.402865 master-0 kubenswrapper[8244]: I0318 10:05:35.402721 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:35.402865 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:35.402865 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:35.402865 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:35.402865 master-0 kubenswrapper[8244]: I0318 10:05:35.402814 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:36.402495 master-0 kubenswrapper[8244]: I0318 10:05:36.401619 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:36.402495 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:36.402495 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:36.402495 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:36.402495 master-0 kubenswrapper[8244]: I0318 10:05:36.401685 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:37.402443 master-0 kubenswrapper[8244]: I0318 10:05:37.402346 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:37.402443 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:37.402443 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:37.402443 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:37.403461 master-0 kubenswrapper[8244]: I0318 10:05:37.402459 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:38.402553 master-0 kubenswrapper[8244]: I0318 10:05:38.402468 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:38.402553 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:38.402553 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:38.402553 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:38.402553 master-0 kubenswrapper[8244]: I0318 10:05:38.402548 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:39.402421 master-0 kubenswrapper[8244]: I0318 10:05:39.402325 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:39.402421 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:39.402421 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:39.402421 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:39.403211 master-0 kubenswrapper[8244]: I0318 10:05:39.402429 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:39.715689 master-0 kubenswrapper[8244]: I0318 10:05:39.715615 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-7fl4x_bb942756-bac7-414d-b179-cebdce588a13/approver/1.log"
Mar 18 10:05:39.716633 master-0 kubenswrapper[8244]: I0318 10:05:39.716551 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-7fl4x_bb942756-bac7-414d-b179-cebdce588a13/approver/0.log"
Mar 18 10:05:39.717139 master-0 kubenswrapper[8244]: I0318 10:05:39.717075 8244 generic.go:334] "Generic (PLEG): container finished" podID="bb942756-bac7-414d-b179-cebdce588a13" containerID="8009f4f9bf68efb70bfa7b66731f5e2be25cbb5d97d4aeafc6a4a27c0d88d49e" exitCode=1
Mar 18 10:05:39.717240 master-0 kubenswrapper[8244]: I0318 10:05:39.717143 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7fl4x" event={"ID":"bb942756-bac7-414d-b179-cebdce588a13","Type":"ContainerDied","Data":"8009f4f9bf68efb70bfa7b66731f5e2be25cbb5d97d4aeafc6a4a27c0d88d49e"}
Mar 18 10:05:39.717240 master-0 kubenswrapper[8244]: I0318 10:05:39.717195 8244 scope.go:117] "RemoveContainer" containerID="11b5b6c3c569b883f4e3bfd269fb3345429d4cace9fc05301ab08ee60a18aa95"
Mar 18 10:05:39.717934 master-0 kubenswrapper[8244]: I0318 10:05:39.717865 8244 scope.go:117] "RemoveContainer" containerID="8009f4f9bf68efb70bfa7b66731f5e2be25cbb5d97d4aeafc6a4a27c0d88d49e"
Mar 18 10:05:39.718322 master-0 kubenswrapper[8244]: E0318 10:05:39.718242 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"approver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=approver pod=network-node-identity-7fl4x_openshift-network-node-identity(bb942756-bac7-414d-b179-cebdce588a13)\"" pod="openshift-network-node-identity/network-node-identity-7fl4x" podUID="bb942756-bac7-414d-b179-cebdce588a13"
Mar 18 10:05:40.402179 master-0 kubenswrapper[8244]: I0318 10:05:40.402072 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:40.402179 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:40.402179 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:40.402179 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:40.403507 master-0 kubenswrapper[8244]: I0318 10:05:40.402177 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:40.727908 master-0 kubenswrapper[8244]: I0318 10:05:40.727679 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-7fl4x_bb942756-bac7-414d-b179-cebdce588a13/approver/1.log"
Mar 18 10:05:41.402605 master-0 kubenswrapper[8244]: I0318 10:05:41.402521 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:41.402605 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:41.402605 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:41.402605 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:41.403813 master-0 kubenswrapper[8244]: I0318 10:05:41.402609 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:42.359072 master-0 kubenswrapper[8244]: E0318 10:05:42.358978 8244 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 10:05:42.359072 master-0 kubenswrapper[8244]: I0318 10:05:42.359061 8244 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Mar 18 10:05:42.402255 master-0 kubenswrapper[8244]: I0318 10:05:42.402188 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:42.402255 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:42.402255 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:42.402255 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:42.402572 master-0 kubenswrapper[8244]: I0318 10:05:42.402264 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:43.403034 master-0 kubenswrapper[8244]: I0318 10:05:43.402952 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:43.403034 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:43.403034 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:43.403034 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:43.403697 master-0 kubenswrapper[8244]: I0318 10:05:43.403052 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:44.402052 master-0 kubenswrapper[8244]: I0318 10:05:44.401897 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:44.402052 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:44.402052 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:44.402052 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:44.402052 master-0 kubenswrapper[8244]: I0318 10:05:44.402038 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:45.400975 master-0 kubenswrapper[8244]: I0318 10:05:45.400897 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:45.400975 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:45.400975 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:45.400975 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:45.401664 master-0 kubenswrapper[8244]: I0318 10:05:45.400989 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:46.402643 master-0 kubenswrapper[8244]: I0318 10:05:46.402562 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:46.402643 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:46.402643 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:46.402643 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:46.403545 master-0 kubenswrapper[8244]: I0318 10:05:46.402653 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:47.401546 master-0 kubenswrapper[8244]: I0318 10:05:47.401419 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:47.401546 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:47.401546 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:47.401546 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:47.401546 master-0 kubenswrapper[8244]: I0318 10:05:47.401544 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:48.401776 master-0 kubenswrapper[8244]: I0318 10:05:48.401685 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:48.401776 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:48.401776 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:48.401776 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:48.402481 master-0 kubenswrapper[8244]: I0318 10:05:48.401782 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:49.402545 master-0 kubenswrapper[8244]: I0318 10:05:49.402434 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:49.402545 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:49.402545 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:49.402545 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:49.402545 master-0 kubenswrapper[8244]: I0318 10:05:49.402547 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:50.401819 master-0 kubenswrapper[8244]: I0318 10:05:50.401703 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:50.401819 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:50.401819 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:50.401819 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:50.401819 master-0 kubenswrapper[8244]: I0318 10:05:50.401790 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:51.401094 master-0 kubenswrapper[8244]: I0318 10:05:51.401005 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:51.401094 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:51.401094 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:51.401094 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:51.401412 master-0 kubenswrapper[8244]: I0318 10:05:51.401107 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:52.360205 master-0 kubenswrapper[8244]: E0318 10:05:52.360095 8244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms"
Mar 18 10:05:52.401868 master-0 kubenswrapper[8244]: I0318 10:05:52.401764 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:52.401868 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:52.401868 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:52.401868 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:52.401868 master-0 kubenswrapper[8244]: I0318 10:05:52.401845 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:53.401608 master-0 kubenswrapper[8244]: I0318 10:05:53.401552 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:53.401608 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:53.401608 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:53.401608 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:53.402565 master-0 kubenswrapper[8244]: I0318 10:05:53.402528 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:53.735084 master-0 kubenswrapper[8244]: I0318 10:05:53.734938 8244 scope.go:117] "RemoveContainer" containerID="8009f4f9bf68efb70bfa7b66731f5e2be25cbb5d97d4aeafc6a4a27c0d88d49e"
Mar 18 10:05:54.402058 master-0 kubenswrapper[8244]: I0318 10:05:54.401980 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:54.402058 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:54.402058 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:54.402058 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:54.403170 master-0 kubenswrapper[8244]: I0318 10:05:54.402977 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:54.843843 master-0 kubenswrapper[8244]: I0318 10:05:54.843732 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-7fl4x_bb942756-bac7-414d-b179-cebdce588a13/approver/1.log"
Mar 18 10:05:54.845176 master-0 kubenswrapper[8244]: I0318 10:05:54.845100 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7fl4x" event={"ID":"bb942756-bac7-414d-b179-cebdce588a13","Type":"ContainerStarted","Data":"deba1ae9b701f3dda32dc46c957a1aa5ded58112df46805b484d58030fe3f3c1"}
Mar 18 10:05:55.401639 master-0 kubenswrapper[8244]: I0318 10:05:55.401579 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:55.401639 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:55.401639 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:55.401639 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:55.401925 master-0 kubenswrapper[8244]: I0318 10:05:55.401646 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:56.344005 master-0 kubenswrapper[8244]: E0318 10:05:56.343768 8244 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189de75edb7e5989 openshift-kube-controller-manager 12848 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:af8e875368eec13e995ea08015e08c42,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 10:03:42 +0000 UTC,LastTimestamp:2026-03-18 10:05:02.387702157 +0000 UTC m=+618.867438295,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 10:05:56.401719 master-0 kubenswrapper[8244]: I0318 10:05:56.401643 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:56.401719 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:56.401719 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:56.401719 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:56.401719 master-0 kubenswrapper[8244]: I0318 10:05:56.401716 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:05:56.462321 master-0 kubenswrapper[8244]: E0318 10:05:56.462202 8244 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T10:05:46Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T10:05:46Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T10:05:46Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T10:05:46Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 10:05:57.401501 master-0 kubenswrapper[8244]: I0318 10:05:57.401408 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:05:57.401501 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:05:57.401501 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:05:57.401501 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:05:57.402277 master-0 kubenswrapper[8244]: I0318 10:05:57.401560 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk"
podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:05:58.400971 master-0 kubenswrapper[8244]: I0318 10:05:58.400810 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:05:58.400971 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:05:58.400971 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:05:58.400971 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:05:58.401347 master-0 kubenswrapper[8244]: I0318 10:05:58.400987 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:05:58.401347 master-0 kubenswrapper[8244]: I0318 10:05:58.401037 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" Mar 18 10:05:58.401650 master-0 kubenswrapper[8244]: I0318 10:05:58.401613 8244 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"49d021e4bb5a3483651e863b5f33517771b81ab9615ea08cc7bd4cae097b1d2d"} pod="openshift-ingress/router-default-7dcf5569b5-82tbk" containerMessage="Container router failed startup probe, will be restarted" Mar 18 10:05:58.402220 master-0 kubenswrapper[8244]: I0318 10:05:58.401670 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" containerID="cri-o://49d021e4bb5a3483651e863b5f33517771b81ab9615ea08cc7bd4cae097b1d2d" 
gracePeriod=3600 Mar 18 10:05:59.453058 master-0 kubenswrapper[8244]: I0318 10:05:59.452964 8244 status_manager.go:851] "Failed to get status for pod" podUID="2610d88e-f450-455a-9db5-dc59c1d97bf4" pod="openshift-kube-apiserver/installer-3-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-3-master-0)" Mar 18 10:06:01.763069 master-0 kubenswrapper[8244]: E0318 10:06:01.763000 8244 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 18 10:06:01.764659 master-0 kubenswrapper[8244]: I0318 10:06:01.764593 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 18 10:06:01.903222 master-0 kubenswrapper[8244]: I0318 10:06:01.903117 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"9dc4baf2ee903f66ceacf214f401bab7bc4c01b6dec665d83f3584b31ae00f41"} Mar 18 10:06:02.561384 master-0 kubenswrapper[8244]: E0318 10:06:02.561278 8244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Mar 18 10:06:02.914634 master-0 kubenswrapper[8244]: I0318 10:06:02.914473 8244 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="ec0a4a4a27c5788cf435e3f981e3abe7cd525b4f9b545a25440129af48eb261e" exitCode=0 Mar 18 10:06:02.914634 master-0 kubenswrapper[8244]: I0318 10:06:02.914550 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"ec0a4a4a27c5788cf435e3f981e3abe7cd525b4f9b545a25440129af48eb261e"} Mar 18 10:06:02.915453 master-0 kubenswrapper[8244]: I0318 10:06:02.914992 8244 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="9d397d3f-2d40-469c-b5a4-f278c504932a" Mar 18 10:06:02.915453 master-0 kubenswrapper[8244]: I0318 10:06:02.915028 8244 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="9d397d3f-2d40-469c-b5a4-f278c504932a" Mar 18 10:06:06.462765 master-0 kubenswrapper[8244]: E0318 10:06:06.462656 8244 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 10:06:06.950796 master-0 kubenswrapper[8244]: I0318 10:06:06.950706 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_90db95c5-2017-4b04-b11c-9844947c5be9/installer/0.log" Mar 18 10:06:06.951266 master-0 kubenswrapper[8244]: I0318 10:06:06.950794 8244 generic.go:334] "Generic (PLEG): container finished" podID="90db95c5-2017-4b04-b11c-9844947c5be9" containerID="84fe69ce9654e0f778c53fad94cc55da3a405c4d3f78319e40a6e7f4b1d02966" exitCode=1 Mar 18 10:06:06.951266 master-0 kubenswrapper[8244]: I0318 10:06:06.950892 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"90db95c5-2017-4b04-b11c-9844947c5be9","Type":"ContainerDied","Data":"84fe69ce9654e0f778c53fad94cc55da3a405c4d3f78319e40a6e7f4b1d02966"} Mar 18 10:06:08.301018 master-0 kubenswrapper[8244]: I0318 10:06:08.300912 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_90db95c5-2017-4b04-b11c-9844947c5be9/installer/0.log" Mar 18 10:06:08.301600 
master-0 kubenswrapper[8244]: I0318 10:06:08.301034 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 10:06:08.459618 master-0 kubenswrapper[8244]: I0318 10:06:08.459493 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90db95c5-2017-4b04-b11c-9844947c5be9-kube-api-access\") pod \"90db95c5-2017-4b04-b11c-9844947c5be9\" (UID: \"90db95c5-2017-4b04-b11c-9844947c5be9\") " Mar 18 10:06:08.459618 master-0 kubenswrapper[8244]: I0318 10:06:08.459575 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/90db95c5-2017-4b04-b11c-9844947c5be9-kubelet-dir\") pod \"90db95c5-2017-4b04-b11c-9844947c5be9\" (UID: \"90db95c5-2017-4b04-b11c-9844947c5be9\") " Mar 18 10:06:08.460143 master-0 kubenswrapper[8244]: I0318 10:06:08.459679 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/90db95c5-2017-4b04-b11c-9844947c5be9-var-lock\") pod \"90db95c5-2017-4b04-b11c-9844947c5be9\" (UID: \"90db95c5-2017-4b04-b11c-9844947c5be9\") " Mar 18 10:06:08.460143 master-0 kubenswrapper[8244]: I0318 10:06:08.459753 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90db95c5-2017-4b04-b11c-9844947c5be9-var-lock" (OuterVolumeSpecName: "var-lock") pod "90db95c5-2017-4b04-b11c-9844947c5be9" (UID: "90db95c5-2017-4b04-b11c-9844947c5be9"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:06:08.460143 master-0 kubenswrapper[8244]: I0318 10:06:08.459817 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90db95c5-2017-4b04-b11c-9844947c5be9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "90db95c5-2017-4b04-b11c-9844947c5be9" (UID: "90db95c5-2017-4b04-b11c-9844947c5be9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:06:08.460422 master-0 kubenswrapper[8244]: I0318 10:06:08.460219 8244 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/90db95c5-2017-4b04-b11c-9844947c5be9-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 10:06:08.460422 master-0 kubenswrapper[8244]: I0318 10:06:08.460237 8244 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/90db95c5-2017-4b04-b11c-9844947c5be9-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 10:06:08.465683 master-0 kubenswrapper[8244]: I0318 10:06:08.465592 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90db95c5-2017-4b04-b11c-9844947c5be9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "90db95c5-2017-4b04-b11c-9844947c5be9" (UID: "90db95c5-2017-4b04-b11c-9844947c5be9"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 10:06:08.562134 master-0 kubenswrapper[8244]: I0318 10:06:08.562063 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90db95c5-2017-4b04-b11c-9844947c5be9-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 10:06:08.971760 master-0 kubenswrapper[8244]: I0318 10:06:08.971561 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_90db95c5-2017-4b04-b11c-9844947c5be9/installer/0.log" Mar 18 10:06:08.971760 master-0 kubenswrapper[8244]: I0318 10:06:08.971710 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"90db95c5-2017-4b04-b11c-9844947c5be9","Type":"ContainerDied","Data":"2b33ec4b21a843e83059f3a27a8bc8244c587a53368b1233d2c8ea0115ce547d"} Mar 18 10:06:08.972197 master-0 kubenswrapper[8244]: I0318 10:06:08.971773 8244 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b33ec4b21a843e83059f3a27a8bc8244c587a53368b1233d2c8ea0115ce547d" Mar 18 10:06:08.972197 master-0 kubenswrapper[8244]: I0318 10:06:08.971813 8244 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 10:06:12.962779 master-0 kubenswrapper[8244]: E0318 10:06:12.962677 8244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Mar 18 10:06:16.463069 master-0 kubenswrapper[8244]: E0318 10:06:16.462905 8244 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded" Mar 18 10:06:23.765027 master-0 kubenswrapper[8244]: E0318 10:06:23.764269 8244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Mar 18 10:06:26.463900 master-0 kubenswrapper[8244]: E0318 10:06:26.463701 8244 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 10:06:30.347452 master-0 kubenswrapper[8244]: E0318 10:06:30.347221 8244 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189de75ee9e5a110 openshift-kube-controller-manager 12849 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:af8e875368eec13e995ea08015e08c42,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 10:03:43 +0000 UTC,LastTimestamp:2026-03-18 10:05:02.693206516 +0000 UTC m=+619.172942644,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 10:06:31.143548 master-0 kubenswrapper[8244]: I0318 10:06:31.143452 8244 generic.go:334] "Generic (PLEG): container finished" podID="6f266bad-8b30-4300-ad93-9d48e61f2440" containerID="fb1e06109c9333d787d8e6b957a55759794e573da59639d9f2a8746b35212fab" exitCode=0 Mar 18 10:06:31.143548 master-0 kubenswrapper[8244]: I0318 10:06:31.143506 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" event={"ID":"6f266bad-8b30-4300-ad93-9d48e61f2440","Type":"ContainerDied","Data":"fb1e06109c9333d787d8e6b957a55759794e573da59639d9f2a8746b35212fab"} Mar 18 10:06:31.143972 master-0 kubenswrapper[8244]: I0318 10:06:31.143921 8244 scope.go:117] "RemoveContainer" containerID="fb1e06109c9333d787d8e6b957a55759794e573da59639d9f2a8746b35212fab" Mar 18 10:06:32.157542 master-0 kubenswrapper[8244]: I0318 10:06:32.157393 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" event={"ID":"6f266bad-8b30-4300-ad93-9d48e61f2440","Type":"ContainerStarted","Data":"d80f6f0ae43ab2528efc8c923f1718e0e359140add9567bb809450b8e98e5039"} Mar 18 10:06:32.158464 master-0 kubenswrapper[8244]: I0318 10:06:32.157906 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" Mar 18 
10:06:32.160281 master-0 kubenswrapper[8244]: I0318 10:06:32.160240 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" Mar 18 10:06:35.365811 master-0 kubenswrapper[8244]: E0318 10:06:35.365706 8244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Mar 18 10:06:36.468006 master-0 kubenswrapper[8244]: E0318 10:06:36.464672 8244 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 10:06:36.469332 master-0 kubenswrapper[8244]: E0318 10:06:36.469297 8244 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 18 10:06:36.917985 master-0 kubenswrapper[8244]: E0318 10:06:36.917878 8244 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 18 10:06:37.199625 master-0 kubenswrapper[8244]: I0318 10:06:37.199570 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-77n8q_b6948f93-b573-4f09-b754-aaa2269e2875/manager/0.log" Mar 18 10:06:37.199857 master-0 kubenswrapper[8244]: I0318 10:06:37.199642 8244 generic.go:334] "Generic (PLEG): container finished" podID="b6948f93-b573-4f09-b754-aaa2269e2875" containerID="7a73a7304ad52748de231e8de0dd60f0f62a95ba031328669ed0ac946a01de35" exitCode=1 Mar 18 10:06:37.199857 master-0 kubenswrapper[8244]: I0318 10:06:37.199721 8244 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q" event={"ID":"b6948f93-b573-4f09-b754-aaa2269e2875","Type":"ContainerDied","Data":"7a73a7304ad52748de231e8de0dd60f0f62a95ba031328669ed0ac946a01de35"} Mar 18 10:06:37.200329 master-0 kubenswrapper[8244]: I0318 10:06:37.200279 8244 scope.go:117] "RemoveContainer" containerID="7a73a7304ad52748de231e8de0dd60f0f62a95ba031328669ed0ac946a01de35" Mar 18 10:06:37.201927 master-0 kubenswrapper[8244]: I0318 10:06:37.201887 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-2l6cq_932a70df-3afe-4873-9449-ab6e061d3fe3/snapshot-controller/0.log" Mar 18 10:06:37.202125 master-0 kubenswrapper[8244]: I0318 10:06:37.201940 8244 generic.go:334] "Generic (PLEG): container finished" podID="932a70df-3afe-4873-9449-ab6e061d3fe3" containerID="17c5a6d0d57e33e7edf72cf60a77174890881333b1c35130459a5598516f267c" exitCode=1 Mar 18 10:06:37.202125 master-0 kubenswrapper[8244]: I0318 10:06:37.201984 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-2l6cq" event={"ID":"932a70df-3afe-4873-9449-ab6e061d3fe3","Type":"ContainerDied","Data":"17c5a6d0d57e33e7edf72cf60a77174890881333b1c35130459a5598516f267c"} Mar 18 10:06:37.202749 master-0 kubenswrapper[8244]: I0318 10:06:37.202706 8244 scope.go:117] "RemoveContainer" containerID="17c5a6d0d57e33e7edf72cf60a77174890881333b1c35130459a5598516f267c" Mar 18 10:06:37.205126 master-0 kubenswrapper[8244]: I0318 10:06:37.205090 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-nq7mw_0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a/manager/0.log" Mar 18 10:06:37.205667 master-0 kubenswrapper[8244]: I0318 10:06:37.205625 8244 generic.go:334] "Generic (PLEG): container finished" 
podID="0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a" containerID="89f9d8c31d719734af3431b3cec84aa03bf298440dd062c3328c469e4d1b49bb" exitCode=1 Mar 18 10:06:37.205667 master-0 kubenswrapper[8244]: I0318 10:06:37.205659 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" event={"ID":"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a","Type":"ContainerDied","Data":"89f9d8c31d719734af3431b3cec84aa03bf298440dd062c3328c469e4d1b49bb"} Mar 18 10:06:37.206116 master-0 kubenswrapper[8244]: I0318 10:06:37.206078 8244 scope.go:117] "RemoveContainer" containerID="89f9d8c31d719734af3431b3cec84aa03bf298440dd062c3328c469e4d1b49bb" Mar 18 10:06:37.381857 master-0 kubenswrapper[8244]: I0318 10:06:37.381782 8244 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q" Mar 18 10:06:37.381857 master-0 kubenswrapper[8244]: I0318 10:06:37.381849 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q" Mar 18 10:06:38.214979 master-0 kubenswrapper[8244]: I0318 10:06:38.214911 8244 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="c51a160bfa16a28b74f81d311f303e209d7ed9b37be27ca1db9e534e7071f1af" exitCode=0 Mar 18 10:06:38.215851 master-0 kubenswrapper[8244]: I0318 10:06:38.215002 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"c51a160bfa16a28b74f81d311f303e209d7ed9b37be27ca1db9e534e7071f1af"} Mar 18 10:06:38.215851 master-0 kubenswrapper[8244]: I0318 10:06:38.215434 8244 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="9d397d3f-2d40-469c-b5a4-f278c504932a" Mar 18 10:06:38.215851 master-0 kubenswrapper[8244]: I0318 
10:06:38.215458 8244 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="9d397d3f-2d40-469c-b5a4-f278c504932a" Mar 18 10:06:38.220009 master-0 kubenswrapper[8244]: I0318 10:06:38.219940 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-77n8q_b6948f93-b573-4f09-b754-aaa2269e2875/manager/0.log" Mar 18 10:06:38.220146 master-0 kubenswrapper[8244]: I0318 10:06:38.220113 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q" event={"ID":"b6948f93-b573-4f09-b754-aaa2269e2875","Type":"ContainerStarted","Data":"442081490d9465b6058d061666f6e05668388daf26c81d299f3fe1734afa0e04"} Mar 18 10:06:38.220613 master-0 kubenswrapper[8244]: I0318 10:06:38.220561 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q" Mar 18 10:06:38.222688 master-0 kubenswrapper[8244]: I0318 10:06:38.222653 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-nq7mw_0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a/manager/0.log" Mar 18 10:06:38.223234 master-0 kubenswrapper[8244]: I0318 10:06:38.223188 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" event={"ID":"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a","Type":"ContainerStarted","Data":"a388daaac0a3cd7e521e13d6c310f2762c0e10179308f61fd17b72f7cd087cd4"} Mar 18 10:06:38.223495 master-0 kubenswrapper[8244]: I0318 10:06:38.223462 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" Mar 18 10:06:38.225344 master-0 kubenswrapper[8244]: I0318 10:06:38.225305 8244 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-2l6cq_932a70df-3afe-4873-9449-ab6e061d3fe3/snapshot-controller/0.log" Mar 18 10:06:38.225344 master-0 kubenswrapper[8244]: I0318 10:06:38.225343 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-2l6cq" event={"ID":"932a70df-3afe-4873-9449-ab6e061d3fe3","Type":"ContainerStarted","Data":"0781313e8cf2b20835b28fe776f7c1e4a2d3726fbdb7ce76e53c1492ed63a933"} Mar 18 10:06:39.236118 master-0 kubenswrapper[8244]: I0318 10:06:39.236026 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-rpxn4_8641c1d1-dd79-4f1f-9343-52d1ee6faf9f/cluster-cloud-controller-manager/0.log" Mar 18 10:06:39.236118 master-0 kubenswrapper[8244]: I0318 10:06:39.236095 8244 generic.go:334] "Generic (PLEG): container finished" podID="8641c1d1-dd79-4f1f-9343-52d1ee6faf9f" containerID="592ca06fab8bb0c93dfd3465f07a7c645bf00008deb42f76b6d5198afd1f495a" exitCode=1 Mar 18 10:06:39.237110 master-0 kubenswrapper[8244]: I0318 10:06:39.236191 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4" event={"ID":"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f","Type":"ContainerDied","Data":"592ca06fab8bb0c93dfd3465f07a7c645bf00008deb42f76b6d5198afd1f495a"} Mar 18 10:06:39.237110 master-0 kubenswrapper[8244]: I0318 10:06:39.237015 8244 scope.go:117] "RemoveContainer" containerID="592ca06fab8bb0c93dfd3465f07a7c645bf00008deb42f76b6d5198afd1f495a" Mar 18 10:06:40.250107 master-0 kubenswrapper[8244]: I0318 10:06:40.250041 8244 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-rpxn4_8641c1d1-dd79-4f1f-9343-52d1ee6faf9f/cluster-cloud-controller-manager/0.log" Mar 18 10:06:40.250769 master-0 kubenswrapper[8244]: I0318 10:06:40.250125 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4" event={"ID":"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f","Type":"ContainerStarted","Data":"2542c3ab07837361a95f820091cdaf2c668f117560db5e0a11b7f8f9c87d0f7c"} Mar 18 10:06:45.298182 master-0 kubenswrapper[8244]: I0318 10:06:45.298102 8244 generic.go:334] "Generic (PLEG): container finished" podID="43d54514-989c-4c82-93f9-153b44eacdd1" containerID="49d021e4bb5a3483651e863b5f33517771b81ab9615ea08cc7bd4cae097b1d2d" exitCode=0 Mar 18 10:06:45.298182 master-0 kubenswrapper[8244]: I0318 10:06:45.298153 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" event={"ID":"43d54514-989c-4c82-93f9-153b44eacdd1","Type":"ContainerDied","Data":"49d021e4bb5a3483651e863b5f33517771b81ab9615ea08cc7bd4cae097b1d2d"} Mar 18 10:06:45.298182 master-0 kubenswrapper[8244]: I0318 10:06:45.298181 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" event={"ID":"43d54514-989c-4c82-93f9-153b44eacdd1","Type":"ContainerStarted","Data":"027c606848ee1832749ed6e321be439a9482e3f79b6245a43fee2d25af9358b6"} Mar 18 10:06:45.298182 master-0 kubenswrapper[8244]: I0318 10:06:45.298198 8244 scope.go:117] "RemoveContainer" containerID="0056d6e24bcc6dc57e3453a9e7f141adeb078909a14a7b6029f52e100df60161" Mar 18 10:06:45.399459 master-0 kubenswrapper[8244]: I0318 10:06:45.399322 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" Mar 18 10:06:45.403362 master-0 kubenswrapper[8244]: I0318 
10:06:45.403295 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:06:45.403362 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:06:45.403362 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:06:45.403362 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:06:45.403531 master-0 kubenswrapper[8244]: I0318 10:06:45.403409 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:06:46.017428 master-0 kubenswrapper[8244]: I0318 10:06:46.017279 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw"
Mar 18 10:06:46.401384 master-0 kubenswrapper[8244]: I0318 10:06:46.401263 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:06:46.401384 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:06:46.401384 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:06:46.401384 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:06:46.401384 master-0 kubenswrapper[8244]: I0318 10:06:46.401325 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:06:47.323545 master-0 kubenswrapper[8244]: I0318 10:06:47.323412 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-rpxn4_8641c1d1-dd79-4f1f-9343-52d1ee6faf9f/config-sync-controllers/0.log"
Mar 18 10:06:47.324486 master-0 kubenswrapper[8244]: I0318 10:06:47.324417 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-rpxn4_8641c1d1-dd79-4f1f-9343-52d1ee6faf9f/cluster-cloud-controller-manager/0.log"
Mar 18 10:06:47.324663 master-0 kubenswrapper[8244]: I0318 10:06:47.324517 8244 generic.go:334] "Generic (PLEG): container finished" podID="8641c1d1-dd79-4f1f-9343-52d1ee6faf9f" containerID="c9db2465522a9f31bfdb29b4350bcd424f2fa2f288ceeee292a0e5256f8ed40d" exitCode=1
Mar 18 10:06:47.324767 master-0 kubenswrapper[8244]: I0318 10:06:47.324656 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4" event={"ID":"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f","Type":"ContainerDied","Data":"c9db2465522a9f31bfdb29b4350bcd424f2fa2f288ceeee292a0e5256f8ed40d"}
Mar 18 10:06:47.325936 master-0 kubenswrapper[8244]: I0318 10:06:47.325877 8244 scope.go:117] "RemoveContainer" containerID="c9db2465522a9f31bfdb29b4350bcd424f2fa2f288ceeee292a0e5256f8ed40d"
Mar 18 10:06:47.385472 master-0 kubenswrapper[8244]: I0318 10:06:47.385358 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q"
Mar 18 10:06:47.402178 master-0 kubenswrapper[8244]: I0318 10:06:47.402115 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:06:47.402178 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:06:47.402178 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:06:47.402178 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:06:47.402178 master-0 kubenswrapper[8244]: I0318 10:06:47.402181 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:06:48.333497 master-0 kubenswrapper[8244]: I0318 10:06:48.333409 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-rpxn4_8641c1d1-dd79-4f1f-9343-52d1ee6faf9f/config-sync-controllers/0.log"
Mar 18 10:06:48.334157 master-0 kubenswrapper[8244]: I0318 10:06:48.334073 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-rpxn4_8641c1d1-dd79-4f1f-9343-52d1ee6faf9f/cluster-cloud-controller-manager/0.log"
Mar 18 10:06:48.334289 master-0 kubenswrapper[8244]: I0318 10:06:48.334155 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4" event={"ID":"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f","Type":"ContainerStarted","Data":"305fc56c03b6cd5aea16860c0fb31104a106ea871ec097e127bf652e297aac9a"}
Mar 18 10:06:48.401977 master-0 kubenswrapper[8244]: I0318 10:06:48.401897 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:06:48.401977 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:06:48.401977 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:06:48.401977 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:06:48.402679 master-0 kubenswrapper[8244]: I0318 10:06:48.401999 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:06:48.567312 master-0 kubenswrapper[8244]: E0318 10:06:48.567163 8244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="6.4s"
Mar 18 10:06:49.402097 master-0 kubenswrapper[8244]: I0318 10:06:49.402029 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:06:49.402097 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:06:49.402097 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:06:49.402097 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:06:49.402820 master-0 kubenswrapper[8244]: I0318 10:06:49.402105 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:06:50.402145 master-0 kubenswrapper[8244]: I0318 10:06:50.402060 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:06:50.402145 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:06:50.402145 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:06:50.402145 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:06:50.403409 master-0 kubenswrapper[8244]: I0318 10:06:50.402165 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:06:51.401593 master-0 kubenswrapper[8244]: I0318 10:06:51.401507 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:06:51.401593 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:06:51.401593 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:06:51.401593 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:06:51.401944 master-0 kubenswrapper[8244]: I0318 10:06:51.401617 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:06:52.401019 master-0 kubenswrapper[8244]: I0318 10:06:52.400916 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:06:52.401019 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:06:52.401019 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:06:52.401019 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:06:52.401019 master-0 kubenswrapper[8244]: I0318 10:06:52.401018 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:06:53.399127 master-0 kubenswrapper[8244]: I0318 10:06:53.399048 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-82tbk"
Mar 18 10:06:53.401475 master-0 kubenswrapper[8244]: I0318 10:06:53.401419 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:06:53.401475 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:06:53.401475 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:06:53.401475 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:06:53.402113 master-0 kubenswrapper[8244]: I0318 10:06:53.401477 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:06:54.401399 master-0 kubenswrapper[8244]: I0318 10:06:54.401294 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:06:54.401399 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:06:54.401399 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:06:54.401399 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:06:54.402547 master-0 kubenswrapper[8244]: I0318 10:06:54.401444 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:06:55.402784 master-0 kubenswrapper[8244]: I0318 10:06:55.402721 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:06:55.402784 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:06:55.402784 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:06:55.402784 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:06:55.404076 master-0 kubenswrapper[8244]: I0318 10:06:55.404024 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:06:56.402220 master-0 kubenswrapper[8244]: I0318 10:06:56.402128 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:06:56.402220 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:06:56.402220 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:06:56.402220 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:06:56.402731 master-0 kubenswrapper[8244]: I0318 10:06:56.402243 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:06:57.401361 master-0 kubenswrapper[8244]: I0318 10:06:57.401285 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:06:57.401361 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:06:57.401361 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:06:57.401361 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:06:57.402603 master-0 kubenswrapper[8244]: I0318 10:06:57.401390 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:06:58.400464 master-0 kubenswrapper[8244]: I0318 10:06:58.400414 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:06:58.400464 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:06:58.400464 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:06:58.400464 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:06:58.400802 master-0 kubenswrapper[8244]: I0318 10:06:58.400474 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:06:59.401880 master-0 kubenswrapper[8244]: I0318 10:06:59.401774 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:06:59.401880 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:06:59.401880 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:06:59.401880 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:06:59.403000 master-0 kubenswrapper[8244]: I0318 10:06:59.402941 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:06:59.463872 master-0 kubenswrapper[8244]: I0318 10:06:59.463726 8244 status_manager.go:851] "Failed to get status for pod" podUID="af8e875368eec13e995ea08015e08c42" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)"
Mar 18 10:07:00.402368 master-0 kubenswrapper[8244]: I0318 10:07:00.402296 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:00.402368 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:00.402368 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:00.402368 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:00.403339 master-0 kubenswrapper[8244]: I0318 10:07:00.403006 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:01.401967 master-0 kubenswrapper[8244]: I0318 10:07:01.401864 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:01.401967 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:01.401967 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:01.401967 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:01.401967 master-0 kubenswrapper[8244]: I0318 10:07:01.401951 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:02.402348 master-0 kubenswrapper[8244]: I0318 10:07:02.402257 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:02.402348 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:02.402348 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:02.402348 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:02.403334 master-0 kubenswrapper[8244]: I0318 10:07:02.402366 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:03.400983 master-0 kubenswrapper[8244]: I0318 10:07:03.400919 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:03.400983 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:03.400983 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:03.400983 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:03.400983 master-0 kubenswrapper[8244]: I0318 10:07:03.400998 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:04.350447 master-0 kubenswrapper[8244]: E0318 10:07:04.350244 8244 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189de75eeaa28369 openshift-kube-controller-manager 12850 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:af8e875368eec13e995ea08015e08c42,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 10:03:43 +0000 UTC,LastTimestamp:2026-03-18 10:05:02.707553067 +0000 UTC m=+619.187289185,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 10:07:04.401807 master-0 kubenswrapper[8244]: I0318 10:07:04.401694 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:04.401807 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:04.401807 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:04.401807 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:04.401807 master-0 kubenswrapper[8244]: I0318 10:07:04.401794 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:04.968445 master-0 kubenswrapper[8244]: E0318 10:07:04.968311 8244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 18 10:07:05.401866 master-0 kubenswrapper[8244]: I0318 10:07:05.401723 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:05.401866 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:05.401866 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:05.401866 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:05.402970 master-0 kubenswrapper[8244]: I0318 10:07:05.401869 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:06.401545 master-0 kubenswrapper[8244]: I0318 10:07:06.401468 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:06.401545 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:06.401545 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:06.401545 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:06.401901 master-0 kubenswrapper[8244]: I0318 10:07:06.401586 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:07.401690 master-0 kubenswrapper[8244]: I0318 10:07:07.401618 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:07.401690 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:07.401690 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:07.401690 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:07.403210 master-0 kubenswrapper[8244]: I0318 10:07:07.403159 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:08.402789 master-0 kubenswrapper[8244]: I0318 10:07:08.402681 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:08.402789 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:08.402789 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:08.402789 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:08.403479 master-0 kubenswrapper[8244]: I0318 10:07:08.402870 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:08.494360 master-0 kubenswrapper[8244]: I0318 10:07:08.494265 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-2l6cq_932a70df-3afe-4873-9449-ab6e061d3fe3/snapshot-controller/1.log"
Mar 18 10:07:08.495146 master-0 kubenswrapper[8244]: I0318 10:07:08.495094 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-2l6cq_932a70df-3afe-4873-9449-ab6e061d3fe3/snapshot-controller/0.log"
Mar 18 10:07:08.495344 master-0 kubenswrapper[8244]: I0318 10:07:08.495166 8244 generic.go:334] "Generic (PLEG): container finished" podID="932a70df-3afe-4873-9449-ab6e061d3fe3" containerID="0781313e8cf2b20835b28fe776f7c1e4a2d3726fbdb7ce76e53c1492ed63a933" exitCode=1
Mar 18 10:07:08.495344 master-0 kubenswrapper[8244]: I0318 10:07:08.495218 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-2l6cq" event={"ID":"932a70df-3afe-4873-9449-ab6e061d3fe3","Type":"ContainerDied","Data":"0781313e8cf2b20835b28fe776f7c1e4a2d3726fbdb7ce76e53c1492ed63a933"}
Mar 18 10:07:08.495344 master-0 kubenswrapper[8244]: I0318 10:07:08.495297 8244 scope.go:117] "RemoveContainer" containerID="17c5a6d0d57e33e7edf72cf60a77174890881333b1c35130459a5598516f267c"
Mar 18 10:07:08.497001 master-0 kubenswrapper[8244]: I0318 10:07:08.496933 8244 scope.go:117] "RemoveContainer" containerID="0781313e8cf2b20835b28fe776f7c1e4a2d3726fbdb7ce76e53c1492ed63a933"
Mar 18 10:07:08.499540 master-0 kubenswrapper[8244]: E0318 10:07:08.499467 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-2l6cq_openshift-cluster-storage-operator(932a70df-3afe-4873-9449-ab6e061d3fe3)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-2l6cq" podUID="932a70df-3afe-4873-9449-ab6e061d3fe3"
Mar 18 10:07:09.403095 master-0 kubenswrapper[8244]: I0318 10:07:09.403013 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:09.403095 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:09.403095 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:09.403095 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:09.404316 master-0 kubenswrapper[8244]: I0318 10:07:09.403113 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:09.507976 master-0 kubenswrapper[8244]: I0318 10:07:09.507878 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-2l6cq_932a70df-3afe-4873-9449-ab6e061d3fe3/snapshot-controller/1.log"
Mar 18 10:07:10.402994 master-0 kubenswrapper[8244]: I0318 10:07:10.402893 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:10.402994 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:10.402994 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:10.402994 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:10.403947 master-0 kubenswrapper[8244]: I0318 10:07:10.403002 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:10.521007 master-0 kubenswrapper[8244]: I0318 10:07:10.520906 8244 generic.go:334] "Generic (PLEG): container finished" podID="d0605021-862d-424a-a4c1-037fb005b77e" containerID="eb346301fe01e98fabdb59a67db563268a1e2d2d2c9e4e2f98ed640abf5fcf03" exitCode=0
Mar 18 10:07:10.521007 master-0 kubenswrapper[8244]: I0318 10:07:10.521009 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx" event={"ID":"d0605021-862d-424a-a4c1-037fb005b77e","Type":"ContainerDied","Data":"eb346301fe01e98fabdb59a67db563268a1e2d2d2c9e4e2f98ed640abf5fcf03"}
Mar 18 10:07:10.522688 master-0 kubenswrapper[8244]: I0318 10:07:10.522629 8244 scope.go:117] "RemoveContainer" containerID="eb346301fe01e98fabdb59a67db563268a1e2d2d2c9e4e2f98ed640abf5fcf03"
Mar 18 10:07:10.525151 master-0 kubenswrapper[8244]: I0318 10:07:10.524971 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-zcm5j_f88c2a18-11f5-45ef-aff1-3c5976716d85/control-plane-machine-set-operator/0.log"
Mar 18 10:07:10.525211 master-0 kubenswrapper[8244]: I0318 10:07:10.525152 8244 generic.go:334] "Generic (PLEG): container finished" podID="f88c2a18-11f5-45ef-aff1-3c5976716d85" containerID="d77d62684d3696a69a4baad8521b7beec7ec234f5d636741ff18bfd6906b5683" exitCode=1
Mar 18 10:07:10.525272 master-0 kubenswrapper[8244]: I0318 10:07:10.525222 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zcm5j" event={"ID":"f88c2a18-11f5-45ef-aff1-3c5976716d85","Type":"ContainerDied","Data":"d77d62684d3696a69a4baad8521b7beec7ec234f5d636741ff18bfd6906b5683"}
Mar 18 10:07:10.526236 master-0 kubenswrapper[8244]: I0318 10:07:10.526179 8244 scope.go:117] "RemoveContainer" containerID="d77d62684d3696a69a4baad8521b7beec7ec234f5d636741ff18bfd6906b5683"
Mar 18 10:07:11.402077 master-0 kubenswrapper[8244]: I0318 10:07:11.402005 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:11.402077 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:11.402077 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:11.402077 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:11.402634 master-0 kubenswrapper[8244]: I0318 10:07:11.402582 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:11.537696 master-0 kubenswrapper[8244]: I0318 10:07:11.537611 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-zcm5j_f88c2a18-11f5-45ef-aff1-3c5976716d85/control-plane-machine-set-operator/0.log"
Mar 18 10:07:11.538606 master-0 kubenswrapper[8244]: I0318 10:07:11.537756 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zcm5j" event={"ID":"f88c2a18-11f5-45ef-aff1-3c5976716d85","Type":"ContainerStarted","Data":"1edb9db8332d352022088fd5f80630307e7e879fa905c98ea3510f281646bc20"}
Mar 18 10:07:11.541565 master-0 kubenswrapper[8244]: I0318 10:07:11.541519 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx" event={"ID":"d0605021-862d-424a-a4c1-037fb005b77e","Type":"ContainerStarted","Data":"87d4b6994c10b2fef92ad8d63f7182bd65099307b7987023e9fa946d5a050594"}
Mar 18 10:07:11.544273 master-0 kubenswrapper[8244]: I0318 10:07:11.544237 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-lnq7l_1084562a-20a0-432d-b739-90bc0a4daff2/cluster-baremetal-operator/0.log"
Mar 18 10:07:11.544553 master-0 kubenswrapper[8244]: I0318 10:07:11.544512 8244 generic.go:334] "Generic (PLEG): container finished" podID="1084562a-20a0-432d-b739-90bc0a4daff2" containerID="1ecb36ab1ea5528a80738edf9a38359cd4af84dcf07cd0edebf601529c05c59e" exitCode=1
Mar 18 10:07:11.544742 master-0 kubenswrapper[8244]: I0318 10:07:11.544639 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l" event={"ID":"1084562a-20a0-432d-b739-90bc0a4daff2","Type":"ContainerDied","Data":"1ecb36ab1ea5528a80738edf9a38359cd4af84dcf07cd0edebf601529c05c59e"}
Mar 18 10:07:11.545523 master-0 kubenswrapper[8244]: I0318 10:07:11.545498 8244 scope.go:117] "RemoveContainer" containerID="1ecb36ab1ea5528a80738edf9a38359cd4af84dcf07cd0edebf601529c05c59e"
Mar 18 10:07:12.218112 master-0 kubenswrapper[8244]: E0318 10:07:12.217990 8244 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 18 10:07:12.401854 master-0 kubenswrapper[8244]: I0318 10:07:12.401769 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:12.401854 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:12.401854 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:12.401854 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:12.402199 master-0 kubenswrapper[8244]: I0318 10:07:12.401890 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:12.559220 master-0 kubenswrapper[8244]: I0318 10:07:12.559166 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-5c6485487f-95jvh_1ad4aa30-f7d5-47ca-b01e-2643f7195685/machine-approver-controller/0.log"
Mar 18 10:07:12.559926 master-0 kubenswrapper[8244]: I0318 10:07:12.559865 8244 generic.go:334] "Generic (PLEG): container finished" podID="1ad4aa30-f7d5-47ca-b01e-2643f7195685" containerID="989ed9d1224874eccaf2482bae9307a2390fd6b1f5f7b0d51c60b2a5d20c283b" exitCode=255
Mar 18 10:07:12.560029 master-0 kubenswrapper[8244]: I0318 10:07:12.559912 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-95jvh" event={"ID":"1ad4aa30-f7d5-47ca-b01e-2643f7195685","Type":"ContainerDied","Data":"989ed9d1224874eccaf2482bae9307a2390fd6b1f5f7b0d51c60b2a5d20c283b"}
Mar 18 10:07:12.560439 master-0 kubenswrapper[8244]: I0318 10:07:12.560396 8244 scope.go:117] "RemoveContainer" containerID="989ed9d1224874eccaf2482bae9307a2390fd6b1f5f7b0d51c60b2a5d20c283b"
Mar 18 10:07:12.564352 master-0 kubenswrapper[8244]: I0318 10:07:12.564308 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-lnq7l_1084562a-20a0-432d-b739-90bc0a4daff2/cluster-baremetal-operator/0.log"
Mar 18 10:07:12.564449 master-0 kubenswrapper[8244]: I0318 10:07:12.564370 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l" event={"ID":"1084562a-20a0-432d-b739-90bc0a4daff2","Type":"ContainerStarted","Data":"03c3238566614a72d16f19efa6573730668f43fd6aaa0c99dec1d35ce1b607ad"}
Mar 18 10:07:13.401969 master-0 kubenswrapper[8244]: I0318 10:07:13.401917 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:13.401969 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:13.401969 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:13.401969 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:13.402654 master-0 kubenswrapper[8244]: I0318 10:07:13.402614 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:13.577582 master-0 kubenswrapper[8244]: I0318 10:07:13.577534 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-5c6485487f-95jvh_1ad4aa30-f7d5-47ca-b01e-2643f7195685/machine-approver-controller/0.log"
Mar 18 10:07:13.578489 master-0 kubenswrapper[8244]: I0318 10:07:13.578116 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-95jvh" event={"ID":"1ad4aa30-f7d5-47ca-b01e-2643f7195685","Type":"ContainerStarted","Data":"fbe8ceb5da2c1564666a9f165fee301025ee0653509af777c1ef9c48d328f315"}
Mar 18 10:07:13.582107 master-0 kubenswrapper[8244]: I0318 10:07:13.582042 8244 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="4399c846d156fc9ec273e7482a7df69bd6d7ebd35bceea9ea824c44fc0dbb98b" exitCode=0
Mar 18 10:07:13.582285 master-0 kubenswrapper[8244]: I0318 10:07:13.582124 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"4399c846d156fc9ec273e7482a7df69bd6d7ebd35bceea9ea824c44fc0dbb98b"}
Mar 18 10:07:13.582585 master-0 kubenswrapper[8244]: I0318 10:07:13.582535 8244 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="9d397d3f-2d40-469c-b5a4-f278c504932a"
Mar 18 10:07:13.582585 master-0 kubenswrapper[8244]: I0318 10:07:13.582585 8244 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="9d397d3f-2d40-469c-b5a4-f278c504932a"
Mar 18 10:07:14.466672 master-0 kubenswrapper[8244]: I0318 10:07:14.465527 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:14.466672 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:14.466672 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18
10:07:14.466672 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:07:14.466672 master-0 kubenswrapper[8244]: I0318 10:07:14.465631 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:07:15.402744 master-0 kubenswrapper[8244]: I0318 10:07:15.402637 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:07:15.402744 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:07:15.402744 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:07:15.402744 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:07:15.403765 master-0 kubenswrapper[8244]: I0318 10:07:15.402744 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:07:15.613522 master-0 kubenswrapper[8244]: I0318 10:07:15.613455 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-kr5kz_accc57fb-75f5-4f89-9804-6ede7f77e27c/ingress-operator/4.log" Mar 18 10:07:15.615262 master-0 kubenswrapper[8244]: I0318 10:07:15.615226 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-kr5kz_accc57fb-75f5-4f89-9804-6ede7f77e27c/ingress-operator/3.log" Mar 18 10:07:15.615671 master-0 kubenswrapper[8244]: I0318 10:07:15.615620 8244 generic.go:334] "Generic (PLEG): container finished" podID="accc57fb-75f5-4f89-9804-6ede7f77e27c" 
containerID="ced2dd809e0469dcbc3622bf167909cd0814985fc6dca12aa44de7553e61867c" exitCode=1 Mar 18 10:07:15.615760 master-0 kubenswrapper[8244]: I0318 10:07:15.615679 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" event={"ID":"accc57fb-75f5-4f89-9804-6ede7f77e27c","Type":"ContainerDied","Data":"ced2dd809e0469dcbc3622bf167909cd0814985fc6dca12aa44de7553e61867c"} Mar 18 10:07:15.615760 master-0 kubenswrapper[8244]: I0318 10:07:15.615721 8244 scope.go:117] "RemoveContainer" containerID="19028a9b74d8fde675db8214cb7dc59516cd57bb8937a1e369ea219dd5ad277c" Mar 18 10:07:15.616619 master-0 kubenswrapper[8244]: I0318 10:07:15.616575 8244 scope.go:117] "RemoveContainer" containerID="ced2dd809e0469dcbc3622bf167909cd0814985fc6dca12aa44de7553e61867c" Mar 18 10:07:15.616989 master-0 kubenswrapper[8244]: E0318 10:07:15.616945 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-kr5kz_openshift-ingress-operator(accc57fb-75f5-4f89-9804-6ede7f77e27c)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" podUID="accc57fb-75f5-4f89-9804-6ede7f77e27c" Mar 18 10:07:16.402698 master-0 kubenswrapper[8244]: I0318 10:07:16.402576 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:07:16.402698 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:07:16.402698 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:07:16.402698 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:07:16.402698 master-0 kubenswrapper[8244]: I0318 10:07:16.402681 8244 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:07:16.625743 master-0 kubenswrapper[8244]: I0318 10:07:16.625672 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-kr5kz_accc57fb-75f5-4f89-9804-6ede7f77e27c/ingress-operator/4.log" Mar 18 10:07:17.401781 master-0 kubenswrapper[8244]: I0318 10:07:17.401718 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:07:17.401781 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:07:17.401781 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:07:17.401781 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:07:17.402102 master-0 kubenswrapper[8244]: I0318 10:07:17.401853 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:07:18.401778 master-0 kubenswrapper[8244]: I0318 10:07:18.401585 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:07:18.401778 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:07:18.401778 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:07:18.401778 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:07:18.401778 master-0 kubenswrapper[8244]: I0318 
10:07:18.401659 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:07:19.401798 master-0 kubenswrapper[8244]: I0318 10:07:19.401693 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:07:19.401798 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:07:19.401798 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:07:19.401798 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:07:19.401798 master-0 kubenswrapper[8244]: I0318 10:07:19.401780 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:07:20.401411 master-0 kubenswrapper[8244]: I0318 10:07:20.401320 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:07:20.401411 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:07:20.401411 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:07:20.401411 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:07:20.401920 master-0 kubenswrapper[8244]: I0318 10:07:20.401425 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Mar 18 10:07:21.401523 master-0 kubenswrapper[8244]: I0318 10:07:21.401458 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:07:21.401523 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:07:21.401523 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:07:21.401523 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:07:21.401523 master-0 kubenswrapper[8244]: I0318 10:07:21.401520 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:07:21.733523 master-0 kubenswrapper[8244]: I0318 10:07:21.733300 8244 scope.go:117] "RemoveContainer" containerID="0781313e8cf2b20835b28fe776f7c1e4a2d3726fbdb7ce76e53c1492ed63a933" Mar 18 10:07:21.969353 master-0 kubenswrapper[8244]: E0318 10:07:21.969272 8244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 10:07:22.401095 master-0 kubenswrapper[8244]: I0318 10:07:22.401000 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:07:22.401095 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:07:22.401095 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 
10:07:22.401095 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:07:22.401095 master-0 kubenswrapper[8244]: I0318 10:07:22.401072 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:07:22.676601 master-0 kubenswrapper[8244]: I0318 10:07:22.676437 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-2l6cq_932a70df-3afe-4873-9449-ab6e061d3fe3/snapshot-controller/1.log" Mar 18 10:07:22.676601 master-0 kubenswrapper[8244]: I0318 10:07:22.676509 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-2l6cq" event={"ID":"932a70df-3afe-4873-9449-ab6e061d3fe3","Type":"ContainerStarted","Data":"36236a2564cf668e8cea6a27fa0d29c4d06205c458f2212a8b31579a80f6f1ed"} Mar 18 10:07:23.401043 master-0 kubenswrapper[8244]: I0318 10:07:23.400931 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:07:23.401043 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:07:23.401043 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:07:23.401043 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:07:23.401043 master-0 kubenswrapper[8244]: I0318 10:07:23.401035 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:07:23.687069 master-0 kubenswrapper[8244]: I0318 10:07:23.686887 
8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_af8e875368eec13e995ea08015e08c42/kube-controller-manager/0.log" Mar 18 10:07:23.687069 master-0 kubenswrapper[8244]: I0318 10:07:23.686988 8244 generic.go:334] "Generic (PLEG): container finished" podID="af8e875368eec13e995ea08015e08c42" containerID="922d668e986d6aa98fbec9295267ac1f43fd0061254b070e0f57e9b922e66793" exitCode=0 Mar 18 10:07:23.687069 master-0 kubenswrapper[8244]: I0318 10:07:23.687026 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"af8e875368eec13e995ea08015e08c42","Type":"ContainerDied","Data":"922d668e986d6aa98fbec9295267ac1f43fd0061254b070e0f57e9b922e66793"} Mar 18 10:07:23.687856 master-0 kubenswrapper[8244]: I0318 10:07:23.687631 8244 scope.go:117] "RemoveContainer" containerID="922d668e986d6aa98fbec9295267ac1f43fd0061254b070e0f57e9b922e66793" Mar 18 10:07:24.401968 master-0 kubenswrapper[8244]: I0318 10:07:24.401902 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:07:24.401968 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:07:24.401968 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:07:24.401968 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:07:24.402516 master-0 kubenswrapper[8244]: I0318 10:07:24.402474 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:07:24.705957 master-0 kubenswrapper[8244]: I0318 10:07:24.705048 8244 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_af8e875368eec13e995ea08015e08c42/kube-controller-manager/0.log" Mar 18 10:07:24.705957 master-0 kubenswrapper[8244]: I0318 10:07:24.705109 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"af8e875368eec13e995ea08015e08c42","Type":"ContainerStarted","Data":"b81d2972d1f40f8e145d9c3461f6c024efa3418434dfbe9ad3720ec95f64f5a9"} Mar 18 10:07:25.402163 master-0 kubenswrapper[8244]: I0318 10:07:25.402080 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:07:25.402163 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:07:25.402163 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:07:25.402163 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:07:25.402609 master-0 kubenswrapper[8244]: I0318 10:07:25.402182 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:07:26.405280 master-0 kubenswrapper[8244]: I0318 10:07:26.405194 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:07:26.405280 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:07:26.405280 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:07:26.405280 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:07:26.409600 master-0 
kubenswrapper[8244]: I0318 10:07:26.406588 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:07:26.726309 master-0 kubenswrapper[8244]: I0318 10:07:26.726164 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8e27b7d086edf5d2cf47b703574641d8/kube-scheduler/0.log" Mar 18 10:07:26.727029 master-0 kubenswrapper[8244]: I0318 10:07:26.726988 8244 generic.go:334] "Generic (PLEG): container finished" podID="8e27b7d086edf5d2cf47b703574641d8" containerID="c508677fa84c67b31ad63db19f2ce6332119259b51c9ae7aa95d7b13079c3837" exitCode=1 Mar 18 10:07:26.727130 master-0 kubenswrapper[8244]: I0318 10:07:26.727046 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerDied","Data":"c508677fa84c67b31ad63db19f2ce6332119259b51c9ae7aa95d7b13079c3837"} Mar 18 10:07:26.727780 master-0 kubenswrapper[8244]: I0318 10:07:26.727750 8244 scope.go:117] "RemoveContainer" containerID="c508677fa84c67b31ad63db19f2ce6332119259b51c9ae7aa95d7b13079c3837" Mar 18 10:07:27.402260 master-0 kubenswrapper[8244]: I0318 10:07:27.402188 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:07:27.402260 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:07:27.402260 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:07:27.402260 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:07:27.402723 master-0 kubenswrapper[8244]: I0318 10:07:27.402269 8244 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:07:27.734084 master-0 kubenswrapper[8244]: I0318 10:07:27.733935 8244 scope.go:117] "RemoveContainer" containerID="ced2dd809e0469dcbc3622bf167909cd0814985fc6dca12aa44de7553e61867c" Mar 18 10:07:27.734926 master-0 kubenswrapper[8244]: E0318 10:07:27.734564 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-kr5kz_openshift-ingress-operator(accc57fb-75f5-4f89-9804-6ede7f77e27c)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" podUID="accc57fb-75f5-4f89-9804-6ede7f77e27c" Mar 18 10:07:27.742800 master-0 kubenswrapper[8244]: I0318 10:07:27.742722 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8e27b7d086edf5d2cf47b703574641d8/kube-scheduler/0.log" Mar 18 10:07:27.747694 master-0 kubenswrapper[8244]: I0318 10:07:27.747628 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerStarted","Data":"0f4bf1dfc4a190fd3410aa065645689966e325eb73cf7788b53ae0a9bf57f3cc"} Mar 18 10:07:27.748525 master-0 kubenswrapper[8244]: I0318 10:07:27.748461 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 10:07:28.401414 master-0 kubenswrapper[8244]: I0318 10:07:28.401350 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 
500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:07:28.401414 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:07:28.401414 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:07:28.401414 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:07:28.402039 master-0 kubenswrapper[8244]: I0318 10:07:28.401981 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:07:29.402614 master-0 kubenswrapper[8244]: I0318 10:07:29.402520 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:07:29.402614 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:07:29.402614 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:07:29.402614 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:07:29.403652 master-0 kubenswrapper[8244]: I0318 10:07:29.402613 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:07:30.402168 master-0 kubenswrapper[8244]: I0318 10:07:30.402056 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:07:30.402168 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:07:30.402168 master-0 kubenswrapper[8244]: 
[+]process-running ok Mar 18 10:07:30.402168 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:07:30.403433 master-0 kubenswrapper[8244]: I0318 10:07:30.402175 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:07:31.403214 master-0 kubenswrapper[8244]: I0318 10:07:31.403111 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:07:31.403214 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:07:31.403214 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:07:31.403214 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:07:31.403972 master-0 kubenswrapper[8244]: I0318 10:07:31.403242 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:07:32.400768 master-0 kubenswrapper[8244]: I0318 10:07:32.400681 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:07:32.400768 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:07:32.400768 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:07:32.400768 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:07:32.401138 master-0 kubenswrapper[8244]: I0318 10:07:32.400765 8244 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:07:33.424346 master-0 kubenswrapper[8244]: I0318 10:07:33.418332 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:07:33.424346 master-0 kubenswrapper[8244]: I0318 10:07:33.418376 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:07:33.443330 master-0 kubenswrapper[8244]: I0318 10:07:33.440859 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:07:33.443330 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:07:33.443330 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:07:33.443330 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:07:33.443330 master-0 kubenswrapper[8244]: I0318 10:07:33.440943 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:07:34.402537 master-0 kubenswrapper[8244]: I0318 10:07:34.402451 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:07:34.402537 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:07:34.402537 master-0 
kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:34.402537 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:34.403100 master-0 kubenswrapper[8244]: I0318 10:07:34.402567 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:35.401908 master-0 kubenswrapper[8244]: I0318 10:07:35.401816 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:35.401908 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:35.401908 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:35.401908 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:35.402907 master-0 kubenswrapper[8244]: I0318 10:07:35.401925 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:35.450028 master-0 kubenswrapper[8244]: I0318 10:07:35.449970 8244 generic.go:334] "Generic (PLEG): container finished" podID="9fc664ff-2e8f-441d-82dc-8f21c1d362d7" containerID="6959115a6f11e9fd2881ca4214b94da71213aad3f3ef00ebec36ed62d0816399" exitCode=0
Mar 18 10:07:35.450028 master-0 kubenswrapper[8244]: I0318 10:07:35.450019 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" event={"ID":"9fc664ff-2e8f-441d-82dc-8f21c1d362d7","Type":"ContainerDied","Data":"6959115a6f11e9fd2881ca4214b94da71213aad3f3ef00ebec36ed62d0816399"}
Mar 18 10:07:35.450556 master-0 kubenswrapper[8244]: I0318 10:07:35.450412 8244 scope.go:117] "RemoveContainer" containerID="6959115a6f11e9fd2881ca4214b94da71213aad3f3ef00ebec36ed62d0816399"
Mar 18 10:07:36.401873 master-0 kubenswrapper[8244]: I0318 10:07:36.401729 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:36.401873 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:36.401873 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:36.401873 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:36.401873 master-0 kubenswrapper[8244]: I0318 10:07:36.401868 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:36.416029 master-0 kubenswrapper[8244]: I0318 10:07:36.415953 8244 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 10:07:36.416302 master-0 kubenswrapper[8244]: I0318 10:07:36.416036 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="af8e875368eec13e995ea08015e08c42" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 10:07:36.462162 master-0 kubenswrapper[8244]: I0318 10:07:36.462082 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" event={"ID":"9fc664ff-2e8f-441d-82dc-8f21c1d362d7","Type":"ContainerStarted","Data":"1c0004ed0ea941f68b537e3f18e4eff3370d5b413fdcbd5d92b3955c2e83f6ad"}
Mar 18 10:07:36.462629 master-0 kubenswrapper[8244]: I0318 10:07:36.462539 8244 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" containerID="cri-o://6959115a6f11e9fd2881ca4214b94da71213aad3f3ef00ebec36ed62d0816399"
Mar 18 10:07:36.462629 master-0 kubenswrapper[8244]: I0318 10:07:36.462622 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9"
Mar 18 10:07:37.328047 master-0 kubenswrapper[8244]: E0318 10:07:37.327883 8244 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T10:07:27Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T10:07:27Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T10:07:27Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T10:07:27Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 10:07:37.402386 master-0 kubenswrapper[8244]: I0318 10:07:37.402242 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:37.402386 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:37.402386 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:37.402386 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:37.402386 master-0 kubenswrapper[8244]: I0318 10:07:37.402377 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:37.470010 master-0 kubenswrapper[8244]: I0318 10:07:37.469913 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9"
Mar 18 10:07:37.475300 master-0 kubenswrapper[8244]: I0318 10:07:37.475241 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9"
Mar 18 10:07:38.365397 master-0 kubenswrapper[8244]: E0318 10:07:38.365231 8244 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{network-node-identity-7fl4x.189de77a116c624c openshift-network-node-identity 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-network-node-identity,Name:network-node-identity-7fl4x,UID:bb942756-bac7-414d-b179-cebdce588a13,APIVersion:v1,ResourceVersion:3316,FieldPath:spec.containers{approver},},Reason:BackOff,Message:Back-off restarting failed container approver in pod network-node-identity-7fl4x_openshift-network-node-identity(bb942756-bac7-414d-b179-cebdce588a13),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 10:05:39.718185548 +0000 UTC m=+656.197921706,LastTimestamp:2026-03-18 10:05:39.718185548 +0000 UTC m=+656.197921706,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 10:07:38.401032 master-0 kubenswrapper[8244]: I0318 10:07:38.400941 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:38.401032 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:38.401032 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:38.401032 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:38.401032 master-0 kubenswrapper[8244]: I0318 10:07:38.400995 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:38.971018 master-0 kubenswrapper[8244]: E0318 10:07:38.970903 8244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 18 10:07:39.401343 master-0 kubenswrapper[8244]: I0318 10:07:39.401268 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:39.401343 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:39.401343 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:39.401343 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:39.401941 master-0 kubenswrapper[8244]: I0318 10:07:39.401370 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:40.402490 master-0 kubenswrapper[8244]: I0318 10:07:40.402400 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:40.402490 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:40.402490 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:40.402490 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:40.402490 master-0 kubenswrapper[8244]: I0318 10:07:40.402476 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:40.733466 master-0 kubenswrapper[8244]: I0318 10:07:40.733293 8244 scope.go:117] "RemoveContainer" containerID="ced2dd809e0469dcbc3622bf167909cd0814985fc6dca12aa44de7553e61867c"
Mar 18 10:07:40.733894 master-0 kubenswrapper[8244]: E0318 10:07:40.733787 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-kr5kz_openshift-ingress-operator(accc57fb-75f5-4f89-9804-6ede7f77e27c)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" podUID="accc57fb-75f5-4f89-9804-6ede7f77e27c"
Mar 18 10:07:41.402017 master-0 kubenswrapper[8244]: I0318 10:07:41.401923 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:41.402017 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:41.402017 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:41.402017 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:41.402017 master-0 kubenswrapper[8244]: I0318 10:07:41.402007 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:42.401154 master-0 kubenswrapper[8244]: I0318 10:07:42.401057 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:42.401154 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:42.401154 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:42.401154 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:42.401154 master-0 kubenswrapper[8244]: I0318 10:07:42.401156 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:43.402368 master-0 kubenswrapper[8244]: I0318 10:07:43.402245 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:43.402368 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:43.402368 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:43.402368 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:43.402368 master-0 kubenswrapper[8244]: I0318 10:07:43.402346 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:44.402451 master-0 kubenswrapper[8244]: I0318 10:07:44.402366 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:44.402451 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:44.402451 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:44.402451 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:44.402451 master-0 kubenswrapper[8244]: I0318 10:07:44.402446 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:45.401918 master-0 kubenswrapper[8244]: I0318 10:07:45.401809 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:45.401918 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:45.401918 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:45.401918 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:45.402376 master-0 kubenswrapper[8244]: I0318 10:07:45.401927 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:45.813487 master-0 kubenswrapper[8244]: I0318 10:07:45.813379 8244 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 10:07:45.814143 master-0 kubenswrapper[8244]: I0318 10:07:45.813504 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="af8e875368eec13e995ea08015e08c42" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 10:07:46.401886 master-0 kubenswrapper[8244]: I0318 10:07:46.401723 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:46.401886 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:46.401886 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:46.401886 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:46.401886 master-0 kubenswrapper[8244]: I0318 10:07:46.401854 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:47.328491 master-0 kubenswrapper[8244]: E0318 10:07:47.328419 8244 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 10:07:47.402467 master-0 kubenswrapper[8244]: I0318 10:07:47.402386 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:47.402467 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:47.402467 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:47.402467 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:47.402467 master-0 kubenswrapper[8244]: I0318 10:07:47.402460 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:47.586388 master-0 kubenswrapper[8244]: E0318 10:07:47.586171 8244 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 18 10:07:48.401488 master-0 kubenswrapper[8244]: I0318 10:07:48.401417 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:48.401488 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:48.401488 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:48.401488 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:48.402458 master-0 kubenswrapper[8244]: I0318 10:07:48.401495 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:48.559047 master-0 kubenswrapper[8244]: I0318 10:07:48.558962 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"8ab6de4ab6f7e15d15c92c129b4e4f727b4794a9b9d9c8fd458199859bb80c35"}
Mar 18 10:07:48.559047 master-0 kubenswrapper[8244]: I0318 10:07:48.559014 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"8edce4e71cecfae4457a35520658e712853fe5f7943d0341fb4cb9cb34b170ac"}
Mar 18 10:07:48.559047 master-0 kubenswrapper[8244]: I0318 10:07:48.559027 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"6dda24880c260c4a49380224f82bd0302255a57a9081e30246f7376aa462edaf"}
Mar 18 10:07:49.402377 master-0 kubenswrapper[8244]: I0318 10:07:49.402295 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:49.402377 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:49.402377 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:49.402377 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:49.403494 master-0 kubenswrapper[8244]: I0318 10:07:49.402387 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:49.577483 master-0 kubenswrapper[8244]: I0318 10:07:49.577374 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"3d919231c945d2ac76a2314ac90b86daaf0c5723053a078a52a777095897804e"}
Mar 18 10:07:49.577483 master-0 kubenswrapper[8244]: I0318 10:07:49.577460 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"2cedaa526f8077c080292a77549e88acf42196916ed5bec8faa88ce6a3333a29"}
Mar 18 10:07:49.578010 master-0 kubenswrapper[8244]: I0318 10:07:49.577952 8244 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="9d397d3f-2d40-469c-b5a4-f278c504932a"
Mar 18 10:07:49.578010 master-0 kubenswrapper[8244]: I0318 10:07:49.577995 8244 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="9d397d3f-2d40-469c-b5a4-f278c504932a"
Mar 18 10:07:50.402722 master-0 kubenswrapper[8244]: I0318 10:07:50.402633 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:50.402722 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:50.402722 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:50.402722 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:50.402722 master-0 kubenswrapper[8244]: I0318 10:07:50.402718 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:51.401307 master-0 kubenswrapper[8244]: I0318 10:07:51.401241 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:51.401307 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:51.401307 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:51.401307 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:51.401307 master-0 kubenswrapper[8244]: I0318 10:07:51.401302 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:51.764936 master-0 kubenswrapper[8244]: I0318 10:07:51.764720 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Mar 18 10:07:51.764936 master-0 kubenswrapper[8244]: I0318 10:07:51.764805 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Mar 18 10:07:52.402294 master-0 kubenswrapper[8244]: I0318 10:07:52.402193 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:52.402294 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:52.402294 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:52.402294 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:52.403006 master-0 kubenswrapper[8244]: I0318 10:07:52.402305 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:52.605049 master-0 kubenswrapper[8244]: I0318 10:07:52.604985 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-2l6cq_932a70df-3afe-4873-9449-ab6e061d3fe3/snapshot-controller/2.log"
Mar 18 10:07:52.606180 master-0 kubenswrapper[8244]: I0318 10:07:52.606124 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-2l6cq_932a70df-3afe-4873-9449-ab6e061d3fe3/snapshot-controller/1.log"
Mar 18 10:07:52.606297 master-0 kubenswrapper[8244]: I0318 10:07:52.606264 8244 generic.go:334] "Generic (PLEG): container finished" podID="932a70df-3afe-4873-9449-ab6e061d3fe3" containerID="36236a2564cf668e8cea6a27fa0d29c4d06205c458f2212a8b31579a80f6f1ed" exitCode=1
Mar 18 10:07:52.606355 master-0 kubenswrapper[8244]: I0318 10:07:52.606325 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-2l6cq" event={"ID":"932a70df-3afe-4873-9449-ab6e061d3fe3","Type":"ContainerDied","Data":"36236a2564cf668e8cea6a27fa0d29c4d06205c458f2212a8b31579a80f6f1ed"}
Mar 18 10:07:52.606429 master-0 kubenswrapper[8244]: I0318 10:07:52.606387 8244 scope.go:117] "RemoveContainer" containerID="0781313e8cf2b20835b28fe776f7c1e4a2d3726fbdb7ce76e53c1492ed63a933"
Mar 18 10:07:52.607277 master-0 kubenswrapper[8244]: I0318 10:07:52.607210 8244 scope.go:117] "RemoveContainer" containerID="36236a2564cf668e8cea6a27fa0d29c4d06205c458f2212a8b31579a80f6f1ed"
Mar 18 10:07:52.607671 master-0 kubenswrapper[8244]: E0318 10:07:52.607613 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-2l6cq_openshift-cluster-storage-operator(932a70df-3afe-4873-9449-ab6e061d3fe3)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-2l6cq" podUID="932a70df-3afe-4873-9449-ab6e061d3fe3"
Mar 18 10:07:53.402250 master-0 kubenswrapper[8244]: I0318 10:07:53.402100 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:53.402250 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:53.402250 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:53.402250 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:53.403439 master-0 kubenswrapper[8244]: I0318 10:07:53.403095 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:53.616477 master-0 kubenswrapper[8244]: I0318 10:07:53.616397 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-2l6cq_932a70df-3afe-4873-9449-ab6e061d3fe3/snapshot-controller/2.log"
Mar 18 10:07:54.402073 master-0 kubenswrapper[8244]: I0318 10:07:54.401968 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:54.402073 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:54.402073 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:54.402073 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:54.403222 master-0 kubenswrapper[8244]: I0318 10:07:54.402082 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:54.526613 master-0 kubenswrapper[8244]: I0318 10:07:54.526550 8244 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:43460->127.0.0.1:10357: read: connection reset by peer" start-of-body=
Mar 18 10:07:54.526859 master-0 kubenswrapper[8244]: I0318 10:07:54.526636 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="af8e875368eec13e995ea08015e08c42" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:43460->127.0.0.1:10357: read: connection reset by peer"
Mar 18 10:07:54.526859 master-0 kubenswrapper[8244]: I0318 10:07:54.526703 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 10:07:54.527809 master-0 kubenswrapper[8244]: I0318 10:07:54.527662 8244 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"b81d2972d1f40f8e145d9c3461f6c024efa3418434dfbe9ad3720ec95f64f5a9"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Mar 18 10:07:54.527809 master-0 kubenswrapper[8244]: I0318 10:07:54.527787 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="af8e875368eec13e995ea08015e08c42" containerName="cluster-policy-controller" containerID="cri-o://b81d2972d1f40f8e145d9c3461f6c024efa3418434dfbe9ad3720ec95f64f5a9" gracePeriod=30
Mar 18 10:07:54.631641 master-0 kubenswrapper[8244]: I0318 10:07:54.631582 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_af8e875368eec13e995ea08015e08c42/cluster-policy-controller/1.log"
Mar 18 10:07:54.635812 master-0 kubenswrapper[8244]: I0318 10:07:54.635748 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_af8e875368eec13e995ea08015e08c42/kube-controller-manager/0.log"
Mar 18 10:07:54.635950 master-0 kubenswrapper[8244]: I0318 10:07:54.635897 8244 generic.go:334] "Generic (PLEG): container finished" podID="af8e875368eec13e995ea08015e08c42" containerID="b81d2972d1f40f8e145d9c3461f6c024efa3418434dfbe9ad3720ec95f64f5a9" exitCode=255
Mar 18 10:07:54.636004 master-0 kubenswrapper[8244]: I0318 10:07:54.635965 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"af8e875368eec13e995ea08015e08c42","Type":"ContainerDied","Data":"b81d2972d1f40f8e145d9c3461f6c024efa3418434dfbe9ad3720ec95f64f5a9"}
Mar 18 10:07:54.636071 master-0 kubenswrapper[8244]: I0318 10:07:54.636043 8244 scope.go:117] "RemoveContainer" containerID="922d668e986d6aa98fbec9295267ac1f43fd0061254b070e0f57e9b922e66793"
Mar 18 10:07:54.738941 master-0 kubenswrapper[8244]: I0318 10:07:54.738846 8244 scope.go:117] "RemoveContainer" containerID="ced2dd809e0469dcbc3622bf167909cd0814985fc6dca12aa44de7553e61867c"
Mar 18 10:07:54.739291 master-0 kubenswrapper[8244]: E0318 10:07:54.739253 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-kr5kz_openshift-ingress-operator(accc57fb-75f5-4f89-9804-6ede7f77e27c)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" podUID="accc57fb-75f5-4f89-9804-6ede7f77e27c"
Mar 18 10:07:55.402998 master-0 kubenswrapper[8244]: I0318 10:07:55.402893 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:55.402998 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:55.402998 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:55.402998 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:55.403521 master-0 kubenswrapper[8244]: I0318 10:07:55.403014 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:55.668520 master-0 kubenswrapper[8244]: I0318 10:07:55.668370 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_af8e875368eec13e995ea08015e08c42/cluster-policy-controller/1.log"
Mar 18 10:07:55.670650 master-0 kubenswrapper[8244]: I0318 10:07:55.670602 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_af8e875368eec13e995ea08015e08c42/kube-controller-manager/0.log"
Mar 18 10:07:55.670811 master-0 kubenswrapper[8244]: I0318 10:07:55.670662 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"af8e875368eec13e995ea08015e08c42","Type":"ContainerStarted","Data":"34909282c33ce536a4d9c6eacbb108eac29a41b88e2973ba68855234c3ed4ad6"}
Mar 18 10:07:55.972340 master-0 kubenswrapper[8244]: E0318 10:07:55.972116 8244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 18 10:07:56.408721 master-0 kubenswrapper[8244]: I0318 10:07:56.408633 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:56.408721 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:56.408721 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:56.408721 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:56.409488 master-0 kubenswrapper[8244]: I0318 10:07:56.408727 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:57.329571 master-0 kubenswrapper[8244]: E0318 10:07:57.329473 8244 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 10:07:57.402350 master-0 kubenswrapper[8244]: I0318 10:07:57.402224 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:57.402350 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:57.402350 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:57.402350 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:57.402350 master-0 kubenswrapper[8244]: I0318 10:07:57.402306 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:58.403839 master-0 kubenswrapper[8244]: I0318 10:07:58.403740 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:58.403839 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:58.403839 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:58.403839 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:58.404472 master-0 kubenswrapper[8244]: I0318 10:07:58.403883 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:59.401409 master-0 kubenswrapper[8244]: I0318 10:07:59.401348 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:07:59.401409 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:07:59.401409 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:07:59.401409 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:07:59.402208 master-0 kubenswrapper[8244]: I0318 10:07:59.402091 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:07:59.465328 master-0 kubenswrapper[8244]: I0318 10:07:59.465247 8244 status_manager.go:851] "Failed to get status for pod" podUID="2610d88e-f450-455a-9db5-dc59c1d97bf4" pod="openshift-kube-apiserver/installer-3-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-3-master-0)"
Mar 18 10:08:00.402061 master-0 kubenswrapper[8244]: I0318 10:08:00.401973 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:00.402061 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:00.402061 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:00.402061 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:00.402545 master-0 kubenswrapper[8244]: I0318 10:08:00.402083 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:01.408607 master-0 kubenswrapper[8244]: I0318 10:08:01.408498 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:01.408607 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:01.408607 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:01.408607 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:01.409627 master-0 kubenswrapper[8244]: I0318 10:08:01.408610 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:01.801565 master-0 kubenswrapper[8244]: I0318 10:08:01.801484 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Mar 18 10:08:02.402680 master-0 kubenswrapper[8244]: I0318 10:08:02.402592 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:02.402680 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:02.402680 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:02.402680 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:02.403166 master-0 kubenswrapper[8244]: I0318 10:08:02.402693 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:02.813508 master-0 kubenswrapper[8244]: I0318 10:08:02.813399 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:08:02.813508 master-0 kubenswrapper[8244]: I0318 10:08:02.813499 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:08:03.400759 master-0 kubenswrapper[8244]: I0318 10:08:03.400675 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:03.400759 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:03.400759 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:03.400759 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:03.401127 master-0 kubenswrapper[8244]: I0318 10:08:03.400795 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:03.734005 master-0 
kubenswrapper[8244]: I0318 10:08:03.733859 8244 scope.go:117] "RemoveContainer" containerID="36236a2564cf668e8cea6a27fa0d29c4d06205c458f2212a8b31579a80f6f1ed" Mar 18 10:08:03.734991 master-0 kubenswrapper[8244]: E0318 10:08:03.734939 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-2l6cq_openshift-cluster-storage-operator(932a70df-3afe-4873-9449-ab6e061d3fe3)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-2l6cq" podUID="932a70df-3afe-4873-9449-ab6e061d3fe3" Mar 18 10:08:04.402889 master-0 kubenswrapper[8244]: I0318 10:08:04.402786 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:04.402889 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:04.402889 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:04.402889 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:04.403983 master-0 kubenswrapper[8244]: I0318 10:08:04.402915 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:05.404875 master-0 kubenswrapper[8244]: I0318 10:08:05.404728 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:05.404875 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 
10:08:05.404875 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:05.404875 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:05.404875 master-0 kubenswrapper[8244]: I0318 10:08:05.404857 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:05.814426 master-0 kubenswrapper[8244]: I0318 10:08:05.814294 8244 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 10:08:05.814810 master-0 kubenswrapper[8244]: I0318 10:08:05.814441 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="af8e875368eec13e995ea08015e08c42" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 10:08:06.401932 master-0 kubenswrapper[8244]: I0318 10:08:06.401816 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:06.401932 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:06.401932 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:06.401932 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:06.403021 master-0 kubenswrapper[8244]: I0318 10:08:06.401949 
8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:06.808576 master-0 kubenswrapper[8244]: I0318 10:08:06.808427 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 18 10:08:07.331296 master-0 kubenswrapper[8244]: E0318 10:08:07.331196 8244 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 10:08:07.401603 master-0 kubenswrapper[8244]: I0318 10:08:07.401487 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:07.401603 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:07.401603 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:07.401603 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:07.402108 master-0 kubenswrapper[8244]: I0318 10:08:07.401595 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:08.401521 master-0 kubenswrapper[8244]: I0318 10:08:08.401419 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 
18 10:08:08.401521 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:08.401521 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:08.401521 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:08.401521 master-0 kubenswrapper[8244]: I0318 10:08:08.401496 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:09.402188 master-0 kubenswrapper[8244]: I0318 10:08:09.402039 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:09.402188 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:09.402188 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:09.402188 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:09.402188 master-0 kubenswrapper[8244]: I0318 10:08:09.402162 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:09.734114 master-0 kubenswrapper[8244]: I0318 10:08:09.733956 8244 scope.go:117] "RemoveContainer" containerID="ced2dd809e0469dcbc3622bf167909cd0814985fc6dca12aa44de7553e61867c" Mar 18 10:08:09.734543 master-0 kubenswrapper[8244]: E0318 10:08:09.734188 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator 
pod=ingress-operator-66b84d69b-kr5kz_openshift-ingress-operator(accc57fb-75f5-4f89-9804-6ede7f77e27c)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" podUID="accc57fb-75f5-4f89-9804-6ede7f77e27c" Mar 18 10:08:10.401629 master-0 kubenswrapper[8244]: I0318 10:08:10.401542 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:10.401629 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:10.401629 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:10.401629 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:10.401979 master-0 kubenswrapper[8244]: I0318 10:08:10.401657 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:11.401661 master-0 kubenswrapper[8244]: I0318 10:08:11.401566 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:11.401661 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:11.401661 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:11.401661 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:11.402778 master-0 kubenswrapper[8244]: I0318 10:08:11.401707 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" 
Mar 18 10:08:12.370047 master-0 kubenswrapper[8244]: E0318 10:08:12.369754 8244 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{network-node-identity-7fl4x.189de700033f1f71 openshift-network-node-identity 8567 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-network-node-identity,Name:network-node-identity-7fl4x,UID:bb942756-bac7-414d-b179-cebdce588a13,APIVersion:v1,ResourceVersion:3316,FieldPath:spec.containers{approver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:56:55 +0000 UTC,LastTimestamp:2026-03-18 10:05:53.73897558 +0000 UTC m=+670.218711748,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 10:08:12.401587 master-0 kubenswrapper[8244]: I0318 10:08:12.401477 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:12.401587 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:12.401587 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:12.401587 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:12.401587 master-0 kubenswrapper[8244]: I0318 10:08:12.401559 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Mar 18 10:08:12.819743 master-0 kubenswrapper[8244]: I0318 10:08:12.819648 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-lnq7l_1084562a-20a0-432d-b739-90bc0a4daff2/cluster-baremetal-operator/1.log" Mar 18 10:08:12.821360 master-0 kubenswrapper[8244]: I0318 10:08:12.821295 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-lnq7l_1084562a-20a0-432d-b739-90bc0a4daff2/cluster-baremetal-operator/0.log" Mar 18 10:08:12.821549 master-0 kubenswrapper[8244]: I0318 10:08:12.821371 8244 generic.go:334] "Generic (PLEG): container finished" podID="1084562a-20a0-432d-b739-90bc0a4daff2" containerID="03c3238566614a72d16f19efa6573730668f43fd6aaa0c99dec1d35ce1b607ad" exitCode=1 Mar 18 10:08:12.821549 master-0 kubenswrapper[8244]: I0318 10:08:12.821425 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l" event={"ID":"1084562a-20a0-432d-b739-90bc0a4daff2","Type":"ContainerDied","Data":"03c3238566614a72d16f19efa6573730668f43fd6aaa0c99dec1d35ce1b607ad"} Mar 18 10:08:12.821549 master-0 kubenswrapper[8244]: I0318 10:08:12.821488 8244 scope.go:117] "RemoveContainer" containerID="1ecb36ab1ea5528a80738edf9a38359cd4af84dcf07cd0edebf601529c05c59e" Mar 18 10:08:12.822418 master-0 kubenswrapper[8244]: I0318 10:08:12.822357 8244 scope.go:117] "RemoveContainer" containerID="03c3238566614a72d16f19efa6573730668f43fd6aaa0c99dec1d35ce1b607ad" Mar 18 10:08:12.822913 master-0 kubenswrapper[8244]: E0318 10:08:12.822788 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-6f69995874-lnq7l_openshift-machine-api(1084562a-20a0-432d-b739-90bc0a4daff2)\"" 
pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l" podUID="1084562a-20a0-432d-b739-90bc0a4daff2" Mar 18 10:08:12.974033 master-0 kubenswrapper[8244]: E0318 10:08:12.973618 8244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="7s" Mar 18 10:08:13.427004 master-0 kubenswrapper[8244]: I0318 10:08:13.426925 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:13.427004 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:13.427004 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:13.427004 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:13.428176 master-0 kubenswrapper[8244]: I0318 10:08:13.427032 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:13.832422 master-0 kubenswrapper[8244]: I0318 10:08:13.832357 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-lnq7l_1084562a-20a0-432d-b739-90bc0a4daff2/cluster-baremetal-operator/1.log" Mar 18 10:08:14.402168 master-0 kubenswrapper[8244]: I0318 10:08:14.402074 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:14.402168 master-0 kubenswrapper[8244]: [-]has-synced 
failed: reason withheld Mar 18 10:08:14.402168 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:14.402168 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:14.402920 master-0 kubenswrapper[8244]: I0318 10:08:14.402183 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:15.401563 master-0 kubenswrapper[8244]: I0318 10:08:15.401513 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:15.401563 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:15.401563 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:15.401563 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:15.402930 master-0 kubenswrapper[8244]: I0318 10:08:15.402876 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:15.815378 master-0 kubenswrapper[8244]: I0318 10:08:15.815249 8244 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 10:08:15.815648 master-0 kubenswrapper[8244]: I0318 10:08:15.815373 8244 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="af8e875368eec13e995ea08015e08c42" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 10:08:15.968155 master-0 kubenswrapper[8244]: I0318 10:08:15.968036 8244 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 10:08:15.968155 master-0 kubenswrapper[8244]: I0318 10:08:15.968131 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="8e27b7d086edf5d2cf47b703574641d8" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 10:08:15.968527 master-0 kubenswrapper[8244]: I0318 10:08:15.968057 8244 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 10:08:15.968527 master-0 kubenswrapper[8244]: I0318 10:08:15.968230 8244 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="8e27b7d086edf5d2cf47b703574641d8" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while 
waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 10:08:16.401539 master-0 kubenswrapper[8244]: I0318 10:08:16.401448 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:16.401539 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:16.401539 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:16.401539 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:16.402594 master-0 kubenswrapper[8244]: I0318 10:08:16.401694 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:16.733608 master-0 kubenswrapper[8244]: I0318 10:08:16.733439 8244 scope.go:117] "RemoveContainer" containerID="36236a2564cf668e8cea6a27fa0d29c4d06205c458f2212a8b31579a80f6f1ed" Mar 18 10:08:17.004041 master-0 kubenswrapper[8244]: I0318 10:08:17.003882 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-2l6cq_932a70df-3afe-4873-9449-ab6e061d3fe3/snapshot-controller/2.log" Mar 18 10:08:17.004041 master-0 kubenswrapper[8244]: I0318 10:08:17.003977 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-2l6cq" event={"ID":"932a70df-3afe-4873-9449-ab6e061d3fe3","Type":"ContainerStarted","Data":"a4a231c549055fa855added61a1a04bcb99c420a8c29b8d952b99e6ee3109585"} Mar 18 10:08:17.332685 master-0 kubenswrapper[8244]: E0318 10:08:17.332584 8244 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 10:08:17.332685 master-0 kubenswrapper[8244]: E0318 10:08:17.332651 8244 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 18 10:08:17.402807 master-0 kubenswrapper[8244]: I0318 10:08:17.402692 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:17.402807 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:17.402807 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:17.402807 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:17.403783 master-0 kubenswrapper[8244]: I0318 10:08:17.402814 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:18.401998 master-0 kubenswrapper[8244]: I0318 10:08:18.401870 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:18.401998 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:18.401998 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:18.401998 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:18.401998 master-0 kubenswrapper[8244]: I0318 10:08:18.401942 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" 
podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:19.402374 master-0 kubenswrapper[8244]: I0318 10:08:19.402262 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:19.402374 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:19.402374 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:19.402374 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:19.402374 master-0 kubenswrapper[8244]: I0318 10:08:19.402368 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:20.402309 master-0 kubenswrapper[8244]: I0318 10:08:20.402193 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:20.402309 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:20.402309 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:20.402309 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:20.403393 master-0 kubenswrapper[8244]: I0318 10:08:20.402314 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:21.401190 master-0 kubenswrapper[8244]: I0318 10:08:21.401065 8244 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:21.401190 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:21.401190 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:21.401190 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:21.401190 master-0 kubenswrapper[8244]: I0318 10:08:21.401156 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:21.734799 master-0 kubenswrapper[8244]: I0318 10:08:21.734299 8244 scope.go:117] "RemoveContainer" containerID="ced2dd809e0469dcbc3622bf167909cd0814985fc6dca12aa44de7553e61867c" Mar 18 10:08:21.735606 master-0 kubenswrapper[8244]: E0318 10:08:21.734780 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-kr5kz_openshift-ingress-operator(accc57fb-75f5-4f89-9804-6ede7f77e27c)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" podUID="accc57fb-75f5-4f89-9804-6ede7f77e27c" Mar 18 10:08:22.401929 master-0 kubenswrapper[8244]: I0318 10:08:22.401806 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:22.401929 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:22.401929 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 
10:08:22.401929 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:22.401929 master-0 kubenswrapper[8244]: I0318 10:08:22.401925 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:23.401593 master-0 kubenswrapper[8244]: I0318 10:08:23.401494 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:23.401593 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:23.401593 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:23.401593 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:23.402339 master-0 kubenswrapper[8244]: I0318 10:08:23.401598 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:23.581691 master-0 kubenswrapper[8244]: E0318 10:08:23.581620 8244 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 18 10:08:24.059953 master-0 kubenswrapper[8244]: I0318 10:08:24.059883 8244 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="9d397d3f-2d40-469c-b5a4-f278c504932a" Mar 18 10:08:24.059953 master-0 kubenswrapper[8244]: I0318 10:08:24.059934 8244 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="9d397d3f-2d40-469c-b5a4-f278c504932a" Mar 18 10:08:24.402303 master-0 
kubenswrapper[8244]: I0318 10:08:24.402146 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:24.402303 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:24.402303 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:24.402303 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:24.402303 master-0 kubenswrapper[8244]: I0318 10:08:24.402247 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:25.402589 master-0 kubenswrapper[8244]: I0318 10:08:25.402517 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:25.402589 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:25.402589 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:25.402589 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:25.403689 master-0 kubenswrapper[8244]: I0318 10:08:25.402608 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:25.443110 master-0 kubenswrapper[8244]: I0318 10:08:25.443022 8244 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe 
status=failure output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:59340->127.0.0.1:10357: read: connection reset by peer" start-of-body= Mar 18 10:08:25.443110 master-0 kubenswrapper[8244]: I0318 10:08:25.443101 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="af8e875368eec13e995ea08015e08c42" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:59340->127.0.0.1:10357: read: connection reset by peer" Mar 18 10:08:25.443517 master-0 kubenswrapper[8244]: I0318 10:08:25.443176 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:08:25.445217 master-0 kubenswrapper[8244]: I0318 10:08:25.444747 8244 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"34909282c33ce536a4d9c6eacbb108eac29a41b88e2973ba68855234c3ed4ad6"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 18 10:08:25.445217 master-0 kubenswrapper[8244]: I0318 10:08:25.444937 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="af8e875368eec13e995ea08015e08c42" containerName="cluster-policy-controller" containerID="cri-o://34909282c33ce536a4d9c6eacbb108eac29a41b88e2973ba68855234c3ed4ad6" gracePeriod=30 Mar 18 10:08:25.736182 master-0 kubenswrapper[8244]: I0318 10:08:25.736103 8244 scope.go:117] "RemoveContainer" containerID="03c3238566614a72d16f19efa6573730668f43fd6aaa0c99dec1d35ce1b607ad" Mar 18 10:08:25.790167 master-0 kubenswrapper[8244]: I0318 10:08:25.790111 8244 patch_prober.go:28] interesting 
pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 10:08:25.790317 master-0 kubenswrapper[8244]: I0318 10:08:25.790181 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="8e27b7d086edf5d2cf47b703574641d8" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 10:08:25.790459 master-0 kubenswrapper[8244]: I0318 10:08:25.790413 8244 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 10:08:25.790645 master-0 kubenswrapper[8244]: I0318 10:08:25.790604 8244 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="8e27b7d086edf5d2cf47b703574641d8" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 10:08:26.080077 master-0 kubenswrapper[8244]: I0318 10:08:26.079967 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_af8e875368eec13e995ea08015e08c42/cluster-policy-controller/2.log" Mar 18 10:08:26.081058 master-0 kubenswrapper[8244]: I0318 10:08:26.080978 8244 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_af8e875368eec13e995ea08015e08c42/cluster-policy-controller/1.log" Mar 18 10:08:26.083044 master-0 kubenswrapper[8244]: I0318 10:08:26.082988 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_af8e875368eec13e995ea08015e08c42/kube-controller-manager/0.log" Mar 18 10:08:26.083186 master-0 kubenswrapper[8244]: I0318 10:08:26.083053 8244 generic.go:334] "Generic (PLEG): container finished" podID="af8e875368eec13e995ea08015e08c42" containerID="34909282c33ce536a4d9c6eacbb108eac29a41b88e2973ba68855234c3ed4ad6" exitCode=255 Mar 18 10:08:26.083186 master-0 kubenswrapper[8244]: I0318 10:08:26.083143 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"af8e875368eec13e995ea08015e08c42","Type":"ContainerDied","Data":"34909282c33ce536a4d9c6eacbb108eac29a41b88e2973ba68855234c3ed4ad6"} Mar 18 10:08:26.083330 master-0 kubenswrapper[8244]: I0318 10:08:26.083189 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"af8e875368eec13e995ea08015e08c42","Type":"ContainerStarted","Data":"f1f4785a8e07522509ea4dcc453b0c2d3e2548d2f8a5d0fc72eb3f727a1c3d90"} Mar 18 10:08:26.083330 master-0 kubenswrapper[8244]: I0318 10:08:26.083219 8244 scope.go:117] "RemoveContainer" containerID="b81d2972d1f40f8e145d9c3461f6c024efa3418434dfbe9ad3720ec95f64f5a9" Mar 18 10:08:26.087228 master-0 kubenswrapper[8244]: I0318 10:08:26.087188 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-lnq7l_1084562a-20a0-432d-b739-90bc0a4daff2/cluster-baremetal-operator/1.log" Mar 18 10:08:26.087698 master-0 kubenswrapper[8244]: I0318 10:08:26.087646 8244 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l" event={"ID":"1084562a-20a0-432d-b739-90bc0a4daff2","Type":"ContainerStarted","Data":"c0b6e3b46ac87b79d91e8ba9d05e392b0a7e135e1b0676e08c471b66babdb7f6"} Mar 18 10:08:26.401571 master-0 kubenswrapper[8244]: I0318 10:08:26.401482 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:26.401571 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:26.401571 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:26.401571 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:26.402053 master-0 kubenswrapper[8244]: I0318 10:08:26.401592 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:27.101504 master-0 kubenswrapper[8244]: I0318 10:08:27.101454 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_af8e875368eec13e995ea08015e08c42/cluster-policy-controller/2.log" Mar 18 10:08:27.106124 master-0 kubenswrapper[8244]: I0318 10:08:27.106056 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_af8e875368eec13e995ea08015e08c42/kube-controller-manager/0.log" Mar 18 10:08:27.402360 master-0 kubenswrapper[8244]: I0318 10:08:27.402176 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 
10:08:27.402360 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:27.402360 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:27.402360 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:27.402360 master-0 kubenswrapper[8244]: I0318 10:08:27.402281 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:28.401743 master-0 kubenswrapper[8244]: I0318 10:08:28.401687 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:28.401743 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:28.401743 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:28.401743 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:28.402329 master-0 kubenswrapper[8244]: I0318 10:08:28.401756 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:29.402542 master-0 kubenswrapper[8244]: I0318 10:08:29.402387 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:29.402542 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:29.402542 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:29.402542 master-0 kubenswrapper[8244]: healthz 
check failed Mar 18 10:08:29.402542 master-0 kubenswrapper[8244]: I0318 10:08:29.402529 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:29.975640 master-0 kubenswrapper[8244]: E0318 10:08:29.975498 8244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 10:08:30.402017 master-0 kubenswrapper[8244]: I0318 10:08:30.401923 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:30.402017 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:30.402017 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:30.402017 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:30.402439 master-0 kubenswrapper[8244]: I0318 10:08:30.402021 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:31.402487 master-0 kubenswrapper[8244]: I0318 10:08:31.402349 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:31.402487 master-0 kubenswrapper[8244]: [-]has-synced failed: 
reason withheld Mar 18 10:08:31.402487 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:31.402487 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:31.402487 master-0 kubenswrapper[8244]: I0318 10:08:31.402443 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:32.401780 master-0 kubenswrapper[8244]: I0318 10:08:32.401665 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:32.401780 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:32.401780 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:32.401780 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:32.402594 master-0 kubenswrapper[8244]: I0318 10:08:32.401762 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:32.812544 master-0 kubenswrapper[8244]: I0318 10:08:32.812461 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:08:32.812544 master-0 kubenswrapper[8244]: I0318 10:08:32.812530 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:08:33.401675 master-0 kubenswrapper[8244]: I0318 10:08:33.401572 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:33.401675 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:33.401675 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:33.401675 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:33.401675 master-0 kubenswrapper[8244]: I0318 10:08:33.401664 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:34.401999 master-0 kubenswrapper[8244]: I0318 10:08:34.401887 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:34.401999 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:34.401999 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:34.401999 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:34.401999 master-0 kubenswrapper[8244]: I0318 10:08:34.401982 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:34.794378 master-0 kubenswrapper[8244]: I0318 10:08:34.794306 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 10:08:35.402130 master-0 kubenswrapper[8244]: I0318 10:08:35.402004 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:35.402130 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:35.402130 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:35.402130 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:35.402130 master-0 kubenswrapper[8244]: I0318 10:08:35.402107 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:35.813214 master-0 kubenswrapper[8244]: I0318 10:08:35.813111 8244 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 10:08:35.813472 master-0 kubenswrapper[8244]: I0318 10:08:35.813210 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="af8e875368eec13e995ea08015e08c42" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 10:08:36.402091 master-0 kubenswrapper[8244]: I0318 10:08:36.401993 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:36.402091 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:36.402091 
master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:36.402091 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:36.402091 master-0 kubenswrapper[8244]: I0318 10:08:36.402075 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:36.733203 master-0 kubenswrapper[8244]: I0318 10:08:36.733029 8244 scope.go:117] "RemoveContainer" containerID="ced2dd809e0469dcbc3622bf167909cd0814985fc6dca12aa44de7553e61867c" Mar 18 10:08:37.203949 master-0 kubenswrapper[8244]: I0318 10:08:37.203542 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-kr5kz_accc57fb-75f5-4f89-9804-6ede7f77e27c/ingress-operator/4.log" Mar 18 10:08:37.204768 master-0 kubenswrapper[8244]: I0318 10:08:37.204673 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" event={"ID":"accc57fb-75f5-4f89-9804-6ede7f77e27c","Type":"ContainerStarted","Data":"41a39b28ae41b65ea5a9795330d703cee582bc26a84cf17d231d6cd3c1ceeef2"} Mar 18 10:08:37.401970 master-0 kubenswrapper[8244]: I0318 10:08:37.401872 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:37.401970 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:37.401970 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:37.401970 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:37.401970 master-0 kubenswrapper[8244]: I0318 10:08:37.401944 8244 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:38.401925 master-0 kubenswrapper[8244]: I0318 10:08:38.401781 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:38.401925 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:38.401925 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:38.401925 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:38.402544 master-0 kubenswrapper[8244]: I0318 10:08:38.401954 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:39.401936 master-0 kubenswrapper[8244]: I0318 10:08:39.401875 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:39.401936 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:39.401936 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:39.401936 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:39.402479 master-0 kubenswrapper[8244]: I0318 10:08:39.402439 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:40.402310 
master-0 kubenswrapper[8244]: I0318 10:08:40.402243 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:40.402310 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:40.402310 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:40.402310 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:40.402892 master-0 kubenswrapper[8244]: I0318 10:08:40.402314 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:41.402078 master-0 kubenswrapper[8244]: I0318 10:08:41.401995 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:08:41.402078 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:08:41.402078 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:08:41.402078 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:08:41.403309 master-0 kubenswrapper[8244]: I0318 10:08:41.402086 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:08:42.401322 master-0 kubenswrapper[8244]: I0318 10:08:42.401255 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:08:42.401322 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:08:42.401322 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:08:42.401322 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:08:42.401322 master-0 kubenswrapper[8244]: I0318 10:08:42.401321 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:08:43.403289 master-0 kubenswrapper[8244]: I0318 10:08:43.403176 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:08:43.403289 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:08:43.403289 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:08:43.403289 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:08:43.403289 master-0 kubenswrapper[8244]: I0318 10:08:43.403259 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:08:44.401470 master-0 kubenswrapper[8244]: I0318 10:08:44.401392 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:08:44.401470 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:08:44.401470 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:08:44.401470 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:08:44.402197 master-0 kubenswrapper[8244]: I0318 10:08:44.401495 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:08:44.402197 master-0 kubenswrapper[8244]: I0318 10:08:44.401579 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-82tbk"
Mar 18 10:08:44.402636 master-0 kubenswrapper[8244]: I0318 10:08:44.402568 8244 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"027c606848ee1832749ed6e321be439a9482e3f79b6245a43fee2d25af9358b6"} pod="openshift-ingress/router-default-7dcf5569b5-82tbk" containerMessage="Container router failed startup probe, will be restarted"
Mar 18 10:08:44.402754 master-0 kubenswrapper[8244]: I0318 10:08:44.402662 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" containerID="cri-o://027c606848ee1832749ed6e321be439a9482e3f79b6245a43fee2d25af9358b6" gracePeriod=3600
Mar 18 10:08:45.813234 master-0 kubenswrapper[8244]: I0318 10:08:45.813116 8244 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 10:08:45.813234 master-0 kubenswrapper[8244]: I0318 10:08:45.813208 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="af8e875368eec13e995ea08015e08c42" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 10:08:46.373125 master-0 kubenswrapper[8244]: E0318 10:08:46.372892 8244 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{network-node-identity-7fl4x.189de700161542f8 openshift-network-node-identity 8603 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-network-node-identity,Name:network-node-identity-7fl4x,UID:bb942756-bac7-414d-b179-cebdce588a13,APIVersion:v1,ResourceVersion:3316,FieldPath:spec.containers{approver},},Reason:Created,Message:Created container: approver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:56:55 +0000 UTC,LastTimestamp:2026-03-18 10:05:53.920811761 +0000 UTC m=+670.400547919,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 10:08:46.976474 master-0 kubenswrapper[8244]: E0318 10:08:46.976399 8244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 18 10:08:47.297065 master-0 kubenswrapper[8244]: I0318 10:08:47.296968 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-2l6cq_932a70df-3afe-4873-9449-ab6e061d3fe3/snapshot-controller/3.log"
Mar 18 10:08:47.298282 master-0 kubenswrapper[8244]: I0318 10:08:47.298202 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-2l6cq_932a70df-3afe-4873-9449-ab6e061d3fe3/snapshot-controller/2.log"
Mar 18 10:08:47.298498 master-0 kubenswrapper[8244]: I0318 10:08:47.298293 8244 generic.go:334] "Generic (PLEG): container finished" podID="932a70df-3afe-4873-9449-ab6e061d3fe3" containerID="a4a231c549055fa855added61a1a04bcb99c420a8c29b8d952b99e6ee3109585" exitCode=1
Mar 18 10:08:47.298498 master-0 kubenswrapper[8244]: I0318 10:08:47.298353 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-2l6cq" event={"ID":"932a70df-3afe-4873-9449-ab6e061d3fe3","Type":"ContainerDied","Data":"a4a231c549055fa855added61a1a04bcb99c420a8c29b8d952b99e6ee3109585"}
Mar 18 10:08:47.298498 master-0 kubenswrapper[8244]: I0318 10:08:47.298450 8244 scope.go:117] "RemoveContainer" containerID="36236a2564cf668e8cea6a27fa0d29c4d06205c458f2212a8b31579a80f6f1ed"
Mar 18 10:08:47.299593 master-0 kubenswrapper[8244]: I0318 10:08:47.299495 8244 scope.go:117] "RemoveContainer" containerID="a4a231c549055fa855added61a1a04bcb99c420a8c29b8d952b99e6ee3109585"
Mar 18 10:08:47.300211 master-0 kubenswrapper[8244]: E0318 10:08:47.300139 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-2l6cq_openshift-cluster-storage-operator(932a70df-3afe-4873-9449-ab6e061d3fe3)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-2l6cq" podUID="932a70df-3afe-4873-9449-ab6e061d3fe3"
Mar 18 10:08:48.309272 master-0 kubenswrapper[8244]: I0318 10:08:48.309189 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-2l6cq_932a70df-3afe-4873-9449-ab6e061d3fe3/snapshot-controller/3.log"
Mar 18 10:08:55.813977 master-0 kubenswrapper[8244]: I0318 10:08:55.813885 8244 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 10:08:55.813977 master-0 kubenswrapper[8244]: I0318 10:08:55.813953 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="af8e875368eec13e995ea08015e08c42" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 10:08:55.814669 master-0 kubenswrapper[8244]: I0318 10:08:55.814005 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 10:08:55.814669 master-0 kubenswrapper[8244]: I0318 10:08:55.814618 8244 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"f1f4785a8e07522509ea4dcc453b0c2d3e2548d2f8a5d0fc72eb3f727a1c3d90"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Mar 18 10:08:55.814764 master-0 kubenswrapper[8244]: I0318 10:08:55.814688 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="af8e875368eec13e995ea08015e08c42" containerName="cluster-policy-controller" containerID="cri-o://f1f4785a8e07522509ea4dcc453b0c2d3e2548d2f8a5d0fc72eb3f727a1c3d90" gracePeriod=30
Mar 18 10:08:55.950594 master-0 kubenswrapper[8244]: E0318 10:08:55.950548 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(af8e875368eec13e995ea08015e08c42)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="af8e875368eec13e995ea08015e08c42"
Mar 18 10:08:56.373887 master-0 kubenswrapper[8244]: I0318 10:08:56.373797 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_af8e875368eec13e995ea08015e08c42/cluster-policy-controller/3.log"
Mar 18 10:08:56.374514 master-0 kubenswrapper[8244]: I0318 10:08:56.374465 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_af8e875368eec13e995ea08015e08c42/cluster-policy-controller/2.log"
Mar 18 10:08:56.377057 master-0 kubenswrapper[8244]: I0318 10:08:56.377010 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_af8e875368eec13e995ea08015e08c42/kube-controller-manager/0.log"
Mar 18 10:08:56.377168 master-0 kubenswrapper[8244]: I0318 10:08:56.377077 8244 generic.go:334] "Generic (PLEG): container finished" podID="af8e875368eec13e995ea08015e08c42" containerID="f1f4785a8e07522509ea4dcc453b0c2d3e2548d2f8a5d0fc72eb3f727a1c3d90" exitCode=255
Mar 18 10:08:56.377168 master-0 kubenswrapper[8244]: I0318 10:08:56.377119 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"af8e875368eec13e995ea08015e08c42","Type":"ContainerDied","Data":"f1f4785a8e07522509ea4dcc453b0c2d3e2548d2f8a5d0fc72eb3f727a1c3d90"}
Mar 18 10:08:56.377348 master-0 kubenswrapper[8244]: I0318 10:08:56.377168 8244 scope.go:117] "RemoveContainer" containerID="34909282c33ce536a4d9c6eacbb108eac29a41b88e2973ba68855234c3ed4ad6"
Mar 18 10:08:56.378530 master-0 kubenswrapper[8244]: I0318 10:08:56.378484 8244 scope.go:117] "RemoveContainer" containerID="f1f4785a8e07522509ea4dcc453b0c2d3e2548d2f8a5d0fc72eb3f727a1c3d90"
Mar 18 10:08:56.379137 master-0 kubenswrapper[8244]: E0318 10:08:56.379024 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(af8e875368eec13e995ea08015e08c42)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="af8e875368eec13e995ea08015e08c42"
Mar 18 10:08:57.384231 master-0 kubenswrapper[8244]: I0318 10:08:57.384185 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_af8e875368eec13e995ea08015e08c42/cluster-policy-controller/3.log"
Mar 18 10:08:57.385803 master-0 kubenswrapper[8244]: I0318 10:08:57.385768 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_af8e875368eec13e995ea08015e08c42/kube-controller-manager/0.log"
Mar 18 10:08:58.063015 master-0 kubenswrapper[8244]: E0318 10:08:58.062966 8244 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 18 10:08:59.467169 master-0 kubenswrapper[8244]: I0318 10:08:59.467075 8244 status_manager.go:851] "Failed to get status for pod" podUID="bb942756-bac7-414d-b179-cebdce588a13" pod="openshift-network-node-identity/network-node-identity-7fl4x" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods network-node-identity-7fl4x)"
Mar 18 10:09:02.733705 master-0 kubenswrapper[8244]: I0318 10:09:02.733646 8244 scope.go:117] "RemoveContainer" containerID="a4a231c549055fa855added61a1a04bcb99c420a8c29b8d952b99e6ee3109585"
Mar 18 10:09:02.735633 master-0 kubenswrapper[8244]: E0318 10:09:02.735581 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-2l6cq_openshift-cluster-storage-operator(932a70df-3afe-4873-9449-ab6e061d3fe3)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-2l6cq" podUID="932a70df-3afe-4873-9449-ab6e061d3fe3"
Mar 18 10:09:02.812427 master-0 kubenswrapper[8244]: I0318 10:09:02.812377 8244 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 10:09:02.813551 master-0 kubenswrapper[8244]: I0318 10:09:02.813527 8244 scope.go:117] "RemoveContainer" containerID="f1f4785a8e07522509ea4dcc453b0c2d3e2548d2f8a5d0fc72eb3f727a1c3d90"
Mar 18 10:09:02.814066 master-0 kubenswrapper[8244]: E0318 10:09:02.814033 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(af8e875368eec13e995ea08015e08c42)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="af8e875368eec13e995ea08015e08c42"
Mar 18 10:09:03.978331 master-0 kubenswrapper[8244]: E0318 10:09:03.978235 8244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="7s"
Mar 18 10:09:14.733313 master-0 kubenswrapper[8244]: I0318 10:09:14.733240 8244 scope.go:117] "RemoveContainer" containerID="f1f4785a8e07522509ea4dcc453b0c2d3e2548d2f8a5d0fc72eb3f727a1c3d90"
Mar 18 10:09:14.734261 master-0 kubenswrapper[8244]: E0318 10:09:14.733651 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(af8e875368eec13e995ea08015e08c42)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="af8e875368eec13e995ea08015e08c42"
Mar 18 10:09:15.733262 master-0 kubenswrapper[8244]: I0318 10:09:15.733164 8244 scope.go:117] "RemoveContainer" containerID="a4a231c549055fa855added61a1a04bcb99c420a8c29b8d952b99e6ee3109585"
Mar 18 10:09:15.733592 master-0 kubenswrapper[8244]: E0318 10:09:15.733540 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-2l6cq_openshift-cluster-storage-operator(932a70df-3afe-4873-9449-ab6e061d3fe3)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-2l6cq" podUID="932a70df-3afe-4873-9449-ab6e061d3fe3"
Mar 18 10:09:20.376901 master-0 kubenswrapper[8244]: E0318 10:09:20.376685 8244 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{network-node-identity-7fl4x.189de70016ce9ce5 openshift-network-node-identity 8605 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-network-node-identity,Name:network-node-identity-7fl4x,UID:bb942756-bac7-414d-b179-cebdce588a13,APIVersion:v1,ResourceVersion:3316,FieldPath:spec.containers{approver},},Reason:Started,Message:Started container approver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:56:55 +0000 UTC,LastTimestamp:2026-03-18 10:05:53.93661361 +0000 UTC m=+670.416349778,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 10:09:20.979510 master-0 kubenswrapper[8244]: E0318 10:09:20.979419 8244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 18 10:09:26.626007 master-0 kubenswrapper[8244]: I0318 10:09:26.625928 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-lnq7l_1084562a-20a0-432d-b739-90bc0a4daff2/cluster-baremetal-operator/2.log"
Mar 18 10:09:26.627472 master-0 kubenswrapper[8244]: I0318 10:09:26.627418 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-lnq7l_1084562a-20a0-432d-b739-90bc0a4daff2/cluster-baremetal-operator/1.log"
Mar 18 10:09:26.628114 master-0 kubenswrapper[8244]: I0318 10:09:26.628054 8244 generic.go:334] "Generic (PLEG): container finished" podID="1084562a-20a0-432d-b739-90bc0a4daff2" containerID="c0b6e3b46ac87b79d91e8ba9d05e392b0a7e135e1b0676e08c471b66babdb7f6" exitCode=1
Mar 18 10:09:26.628248 master-0 kubenswrapper[8244]: I0318 10:09:26.628116 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l" event={"ID":"1084562a-20a0-432d-b739-90bc0a4daff2","Type":"ContainerDied","Data":"c0b6e3b46ac87b79d91e8ba9d05e392b0a7e135e1b0676e08c471b66babdb7f6"}
Mar 18 10:09:26.628248 master-0 kubenswrapper[8244]: I0318 10:09:26.628165 8244 scope.go:117] "RemoveContainer" containerID="03c3238566614a72d16f19efa6573730668f43fd6aaa0c99dec1d35ce1b607ad"
Mar 18 10:09:26.628945 master-0 kubenswrapper[8244]: I0318 10:09:26.628899 8244 scope.go:117] "RemoveContainer" containerID="c0b6e3b46ac87b79d91e8ba9d05e392b0a7e135e1b0676e08c471b66babdb7f6"
Mar 18 10:09:26.629556 master-0 kubenswrapper[8244]: E0318 10:09:26.629401 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-6f69995874-lnq7l_openshift-machine-api(1084562a-20a0-432d-b739-90bc0a4daff2)\"" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l" podUID="1084562a-20a0-432d-b739-90bc0a4daff2"
Mar 18 10:09:27.638591 master-0 kubenswrapper[8244]: I0318 10:09:27.638534 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-lnq7l_1084562a-20a0-432d-b739-90bc0a4daff2/cluster-baremetal-operator/2.log"
Mar 18 10:09:28.733522 master-0 kubenswrapper[8244]: I0318 10:09:28.733302 8244 scope.go:117] "RemoveContainer" containerID="f1f4785a8e07522509ea4dcc453b0c2d3e2548d2f8a5d0fc72eb3f727a1c3d90"
Mar 18 10:09:28.734481 master-0 kubenswrapper[8244]: E0318 10:09:28.733710 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(af8e875368eec13e995ea08015e08c42)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="af8e875368eec13e995ea08015e08c42"
Mar 18 10:09:30.185407 master-0 kubenswrapper[8244]: I0318 10:09:30.185341 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body=
Mar 18 10:09:30.185966 master-0 kubenswrapper[8244]: I0318 10:09:30.185423 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused"
Mar 18 10:09:30.668427 master-0 kubenswrapper[8244]: I0318 10:09:30.668230 8244 generic.go:334] "Generic (PLEG): container finished" podID="432f611b-a1a2-4cc9-b005-17a16413d281" containerID="fd996d8153064578e39564038db6d922a85643610cafc41bae9a4fe71acf8389" exitCode=0
Mar 18 10:09:30.668427 master-0 kubenswrapper[8244]: I0318 10:09:30.668359 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s" event={"ID":"432f611b-a1a2-4cc9-b005-17a16413d281","Type":"ContainerDied","Data":"fd996d8153064578e39564038db6d922a85643610cafc41bae9a4fe71acf8389"}
Mar 18 10:09:30.669777 master-0 kubenswrapper[8244]: I0318 10:09:30.669429 8244 scope.go:117] "RemoveContainer" containerID="fd996d8153064578e39564038db6d922a85643610cafc41bae9a4fe71acf8389"
Mar 18 10:09:30.673743 master-0 kubenswrapper[8244]: I0318 10:09:30.672644 8244 generic.go:334] "Generic (PLEG): container finished" podID="5ea90fee-5b5e-4b59-bfc4-969ee8c7912e" containerID="ba2a4b371f548813e64e9936bac5f8a30427b5b6c9ba22e587be7235d007fdc6" exitCode=0
Mar 18 10:09:30.673743 master-0 kubenswrapper[8244]: I0318 10:09:30.672737 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-jjcsv" event={"ID":"5ea90fee-5b5e-4b59-bfc4-969ee8c7912e","Type":"ContainerDied","Data":"ba2a4b371f548813e64e9936bac5f8a30427b5b6c9ba22e587be7235d007fdc6"}
Mar 18 10:09:30.675258 master-0 kubenswrapper[8244]: I0318 10:09:30.675185 8244 scope.go:117] "RemoveContainer" containerID="ba2a4b371f548813e64e9936bac5f8a30427b5b6c9ba22e587be7235d007fdc6"
Mar 18 10:09:30.677518 master-0 kubenswrapper[8244]: I0318 10:09:30.677359 8244 generic.go:334] "Generic (PLEG): container finished" podID="d26036f1-bdce-4ec5-873f-962fa7e8e6c1" containerID="0e2eb9f88477dff52f2e8f12bdb93c5b6461b1901f2eeb98ccf29a08010685ef" exitCode=0
Mar 18 10:09:30.677518 master-0 kubenswrapper[8244]: I0318 10:09:30.677447 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q" event={"ID":"d26036f1-bdce-4ec5-873f-962fa7e8e6c1","Type":"ContainerDied","Data":"0e2eb9f88477dff52f2e8f12bdb93c5b6461b1901f2eeb98ccf29a08010685ef"}
Mar 18 10:09:30.677963 master-0 kubenswrapper[8244]: I0318 10:09:30.677519 8244 scope.go:117] "RemoveContainer" containerID="4404e590fec7407faf870aa1aae084da39b8f0b6251730c82fd52357f9b81e01"
Mar 18 10:09:30.680129 master-0 kubenswrapper[8244]: I0318 10:09:30.678483 8244 scope.go:117] "RemoveContainer" containerID="0e2eb9f88477dff52f2e8f12bdb93c5b6461b1901f2eeb98ccf29a08010685ef"
Mar 18 10:09:30.681261 master-0 kubenswrapper[8244]: I0318 10:09:30.681150 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-598fbc5f8f-s7rm6_c2635254-a491-42e5-b598-461c24bf77ca/cluster-node-tuning-operator/0.log"
Mar 18 10:09:30.681364 master-0 kubenswrapper[8244]: I0318 10:09:30.681311 8244 generic.go:334] "Generic (PLEG): container finished" podID="c2635254-a491-42e5-b598-461c24bf77ca" containerID="c59a5fbf874d40b4d6dbdabc263d54ba8033378f9b3eccda436cb84f154d827b" exitCode=1
Mar 18 10:09:30.681522 master-0 kubenswrapper[8244]: I0318 10:09:30.681436 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" event={"ID":"c2635254-a491-42e5-b598-461c24bf77ca","Type":"ContainerDied","Data":"c59a5fbf874d40b4d6dbdabc263d54ba8033378f9b3eccda436cb84f154d827b"}
Mar 18 10:09:30.682321 master-0 kubenswrapper[8244]: I0318 10:09:30.682276 8244 scope.go:117] "RemoveContainer" containerID="c59a5fbf874d40b4d6dbdabc263d54ba8033378f9b3eccda436cb84f154d827b"
Mar 18 10:09:30.689753 master-0 kubenswrapper[8244]: I0318 10:09:30.686964 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-495pg_0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480/openshift-config-operator/1.log"
Mar 18 10:09:30.689753 master-0 kubenswrapper[8244]: I0318 10:09:30.687901 8244 generic.go:334] "Generic (PLEG): container finished" podID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerID="28649efad05eac5b0f41333b14d359f00b8f30fb75f4db907f9a07ca5b91b9da" exitCode=0
Mar 18 10:09:30.689753 master-0 kubenswrapper[8244]: I0318 10:09:30.687994 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" event={"ID":"0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480","Type":"ContainerDied","Data":"28649efad05eac5b0f41333b14d359f00b8f30fb75f4db907f9a07ca5b91b9da"}
Mar 18 10:09:30.689753 master-0 kubenswrapper[8244]: I0318 10:09:30.688610 8244 scope.go:117] "RemoveContainer" containerID="28649efad05eac5b0f41333b14d359f00b8f30fb75f4db907f9a07ca5b91b9da"
Mar 18 10:09:30.696473 master-0 kubenswrapper[8244]: I0318 10:09:30.694933 8244 generic.go:334] "Generic (PLEG): container finished" podID="8e812dd9-cd05-4e9e-8710-d0920181ece2" containerID="0f3ba17641fd2eeb6aa8e7525f8b6f8d95a3be2ff7d2acad4eb9670c5982bbeb" exitCode=0
Mar 18 10:09:30.696473 master-0 kubenswrapper[8244]: I0318 10:09:30.695031 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-mqbmq" event={"ID":"8e812dd9-cd05-4e9e-8710-d0920181ece2","Type":"ContainerDied","Data":"0f3ba17641fd2eeb6aa8e7525f8b6f8d95a3be2ff7d2acad4eb9670c5982bbeb"}
Mar 18 10:09:30.696473 master-0 kubenswrapper[8244]: I0318 10:09:30.695486 8244 scope.go:117] "RemoveContainer" containerID="0f3ba17641fd2eeb6aa8e7525f8b6f8d95a3be2ff7d2acad4eb9670c5982bbeb"
Mar 18 10:09:30.701369 master-0 kubenswrapper[8244]: I0318 10:09:30.700607 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-r8fkv_d4d2218c-f9df-4d43-8727-ed3a920e23f7/package-server-manager/0.log"
Mar 18 10:09:30.701926 master-0 kubenswrapper[8244]: I0318 10:09:30.701753 8244 generic.go:334] "Generic (PLEG): container finished" podID="d4d2218c-f9df-4d43-8727-ed3a920e23f7" containerID="2ad786c56f6dcaf1e2cffec16812c116ea52e84ada296839ebfedd3ef5e41741" exitCode=1
Mar 18 10:09:30.702234 master-0 kubenswrapper[8244]: I0318 10:09:30.701952 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv" event={"ID":"d4d2218c-f9df-4d43-8727-ed3a920e23f7","Type":"ContainerDied","Data":"2ad786c56f6dcaf1e2cffec16812c116ea52e84ada296839ebfedd3ef5e41741"}
Mar 18 10:09:30.702874 master-0 kubenswrapper[8244]: I0318 10:09:30.702790 8244 scope.go:117] "RemoveContainer" containerID="2ad786c56f6dcaf1e2cffec16812c116ea52e84ada296839ebfedd3ef5e41741"
Mar 18 10:09:30.705747 master-0 kubenswrapper[8244]: I0318 10:09:30.705704 8244 generic.go:334] "Generic (PLEG): container finished" podID="8ee99294-4785-49d0-b493-0d734cf09396" containerID="9f8d2fc41a698996d2e8d108e6acdc91bab1b3eba85194b567c7b7ad7a300279" exitCode=0
Mar 18 10:09:30.705856 master-0 kubenswrapper[8244]: I0318 10:09:30.705763 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m" event={"ID":"8ee99294-4785-49d0-b493-0d734cf09396","Type":"ContainerDied","Data":"9f8d2fc41a698996d2e8d108e6acdc91bab1b3eba85194b567c7b7ad7a300279"}
Mar 18 10:09:30.706144 master-0 kubenswrapper[8244]: I0318 10:09:30.706099 8244 scope.go:117] "RemoveContainer" containerID="9f8d2fc41a698996d2e8d108e6acdc91bab1b3eba85194b567c7b7ad7a300279"
Mar 18 10:09:30.708960 master-0 kubenswrapper[8244]: I0318 10:09:30.708816 8244 generic.go:334] "Generic (PLEG): container finished" podID="2d014721-ed53-447a-b737-c496bbba18be" containerID="09180a6a9fee68a97b5503198f4ae1ab6d84235d2b7270501ebf779151b55941" exitCode=0
Mar 18 10:09:30.708960 master-0 kubenswrapper[8244]: I0318 10:09:30.708901 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t" event={"ID":"2d014721-ed53-447a-b737-c496bbba18be","Type":"ContainerDied","Data":"09180a6a9fee68a97b5503198f4ae1ab6d84235d2b7270501ebf779151b55941"}
Mar 18 10:09:30.709483 master-0 kubenswrapper[8244]: I0318 10:09:30.709211 8244 scope.go:117] "RemoveContainer" containerID="09180a6a9fee68a97b5503198f4ae1ab6d84235d2b7270501ebf779151b55941"
Mar 18 10:09:30.713771 master-0 kubenswrapper[8244]: I0318 10:09:30.713675 8244 generic.go:334] "Generic (PLEG): container finished" podID="43d54514-989c-4c82-93f9-153b44eacdd1" containerID="027c606848ee1832749ed6e321be439a9482e3f79b6245a43fee2d25af9358b6" exitCode=0
Mar 18 10:09:30.713970 master-0 kubenswrapper[8244]: I0318 10:09:30.713775 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" event={"ID":"43d54514-989c-4c82-93f9-153b44eacdd1","Type":"ContainerDied","Data":"027c606848ee1832749ed6e321be439a9482e3f79b6245a43fee2d25af9358b6"}
Mar 18 10:09:30.734654 master-0 kubenswrapper[8244]: I0318 10:09:30.734058 8244 scope.go:117] "RemoveContainer" containerID="a4a231c549055fa855added61a1a04bcb99c420a8c29b8d952b99e6ee3109585"
Mar 18 10:09:30.737053 master-0 kubenswrapper[8244]: I0318 10:09:30.737008 8244 scope.go:117] "RemoveContainer" containerID="7d07e8c06ddf9d3c29ebaf294b7a205901752e302793187eb4f8dcbb44b41fc0"
Mar 18 10:09:30.814723 master-0 kubenswrapper[8244]: I0318 10:09:30.814677 8244 scope.go:117] "RemoveContainer" containerID="49d021e4bb5a3483651e863b5f33517771b81ab9615ea08cc7bd4cae097b1d2d"
Mar 18 10:09:31.399650 master-0 kubenswrapper[8244]: I0318 10:09:31.399592 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-82tbk"
Mar 18 10:09:31.402624 master-0 kubenswrapper[8244]: I0318 10:09:31.402577 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:09:31.402624 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:09:31.402624 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:09:31.402624 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:09:31.402837 master-0 kubenswrapper[8244]: I0318 10:09:31.402658 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:09:31.688846 master-0 kubenswrapper[8244]: I0318 10:09:31.687166 8244 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg"
Mar 18 10:09:31.749849 master-0 kubenswrapper[8244]: I0318 10:09:31.743234 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-4tlnm_a078565a-6970-4f42-84f4-938f1d637245/etcd-operator/1.log"
Mar 18 10:09:31.749849 master-0 kubenswrapper[8244]: I0318 10:09:31.743281 8244 generic.go:334] "Generic (PLEG): container finished" podID="a078565a-6970-4f42-84f4-938f1d637245" containerID="53e820dc65799d326622907d56bfabcb65416af56a015afddd831825233f23fe" exitCode=0
Mar 18 10:09:31.749849 master-0 kubenswrapper[8244]: I0318 10:09:31.743342 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" event={"ID":"a078565a-6970-4f42-84f4-938f1d637245","Type":"ContainerDied","Data":"53e820dc65799d326622907d56bfabcb65416af56a015afddd831825233f23fe"}
Mar 18 10:09:31.749849 master-0 kubenswrapper[8244]: I0318 10:09:31.743372 8244 scope.go:117] "RemoveContainer" containerID="ff998e161f24e27e62ffb41d5f1af2c4149f9709b9260bb197fe3f8937665152"
Mar 18 10:09:31.749849 master-0 kubenswrapper[8244]: I0318 10:09:31.743684 8244 scope.go:117] "RemoveContainer" containerID="53e820dc65799d326622907d56bfabcb65416af56a015afddd831825233f23fe"
Mar 18 10:09:31.765074 master-0 kubenswrapper[8244]: I0318 10:09:31.765018 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-mqbmq" event={"ID":"8e812dd9-cd05-4e9e-8710-d0920181ece2","Type":"ContainerStarted","Data":"85179fa4ce87a55c1d593899b1c88f0c3d53bc8a87c2f8f645687611ae213372"}
Mar 18 10:09:31.780963 master-0 kubenswrapper[8244]: I0318 10:09:31.778237 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t" event={"ID":"2d014721-ed53-447a-b737-c496bbba18be","Type":"ContainerStarted","Data":"0f269a6b74921af255c6f5df422cba4572f190b61c4ef57c4e541490115ce0ef"}
Mar 18 10:09:31.785266 master-0 kubenswrapper[8244]: I0318 10:09:31.781212 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-598fbc5f8f-s7rm6_c2635254-a491-42e5-b598-461c24bf77ca/cluster-node-tuning-operator/0.log"
Mar 18 10:09:31.785266 master-0 kubenswrapper[8244]: I0318 10:09:31.781288 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" event={"ID":"c2635254-a491-42e5-b598-461c24bf77ca","Type":"ContainerStarted","Data":"1531bc8df108d1e08ae1dd3ef4e75462f776871b6ec36a98bf96e7f826781b7b"}
Mar 18 10:09:31.806982 master-0 kubenswrapper[8244]: I0318 10:09:31.806347 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" event={"ID":"0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480","Type":"ContainerStarted","Data":"6ed6678817d1dbeb82e03a25e183f0798cbf1dafc08404b095ad2e689d372212"}
Mar 18 10:09:31.806982 master-0 kubenswrapper[8244]: I0318 10:09:31.806952 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg"
Mar 18 10:09:31.818909 master-0 kubenswrapper[8244]: I0318 10:09:31.816744 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s" event={"ID":"432f611b-a1a2-4cc9-b005-17a16413d281","Type":"ContainerStarted","Data":"d689a67b5e9c8ea2ac68304523b7338171021e1e1abf69b76e814f08f21797ee"}
Mar 18 10:09:31.820691 master-0 kubenswrapper[8244]: I0318 10:09:31.820477 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-r8fkv_d4d2218c-f9df-4d43-8727-ed3a920e23f7/package-server-manager/0.log"
Mar 18 10:09:31.821041 master-0 kubenswrapper[8244]: I0318 10:09:31.820997 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv" event={"ID":"d4d2218c-f9df-4d43-8727-ed3a920e23f7","Type":"ContainerStarted","Data":"3ece79f0b06ab7aad02470992b2e6d888d2eb265026d860018f7d1b6cf72700b"}
Mar 18 10:09:31.821539 master-0 kubenswrapper[8244]: I0318 10:09:31.821510 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv"
Mar 18 10:09:31.824378 master-0 kubenswrapper[8244]: I0318 10:09:31.824340 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-2l6cq_932a70df-3afe-4873-9449-ab6e061d3fe3/snapshot-controller/3.log"
Mar 18 10:09:31.824479 master-0 kubenswrapper[8244]: I0318 10:09:31.824444 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-2l6cq" event={"ID":"932a70df-3afe-4873-9449-ab6e061d3fe3","Type":"ContainerStarted","Data":"428002058e2cc7469b36c5217491ddda6e0c844530c7cafc91c83e6d4e43957b"}
Mar 18 10:09:31.826738 master-0 kubenswrapper[8244]: I0318 10:09:31.826713 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-jjcsv" event={"ID":"5ea90fee-5b5e-4b59-bfc4-969ee8c7912e","Type":"ContainerStarted","Data":"1c05dcb126f2c0c5c89e4aee8d476e17bb028016176a4f93d5c824d1fa99257e"}
Mar 18 10:09:31.873910 master-0 kubenswrapper[8244]: I0318 10:09:31.853933 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q" event={"ID":"d26036f1-bdce-4ec5-873f-962fa7e8e6c1","Type":"ContainerStarted","Data":"277c335776bc47c6d20604bf19ecc2e8980475a2e67060ff487483dccf1008e2"}
Mar 18 10:09:31.873910 master-0 kubenswrapper[8244]: I0318 10:09:31.867064 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m" event={"ID":"8ee99294-4785-49d0-b493-0d734cf09396","Type":"ContainerStarted","Data":"0187115dd15ebdb1a895ec864b4017d55b75bc1603b02634810e08f87b3cb81a"}
Mar 18 10:09:31.873910 master-0 kubenswrapper[8244]: I0318 10:09:31.869313 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" event={"ID":"43d54514-989c-4c82-93f9-153b44eacdd1","Type":"ContainerStarted","Data":"a793ad14e14427748d4c1657255fe30b1148ea7411a23dfb7ee285b722042b3c"}
Mar 18 10:09:32.401657 master-0 kubenswrapper[8244]: I0318 10:09:32.401550 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:09:32.401657 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:09:32.401657 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:09:32.401657 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:09:32.403059 master-0 kubenswrapper[8244]: I0318 10:09:32.401653 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:09:32.883716 master-0 kubenswrapper[8244]: I0318 10:09:32.883636 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm"
event={"ID":"a078565a-6970-4f42-84f4-938f1d637245","Type":"ContainerStarted","Data":"dc37c46672629320763a83c0799ddbc2c6f96a0e3bdececc30fe6b15161e37c7"} Mar 18 10:09:33.399754 master-0 kubenswrapper[8244]: I0318 10:09:33.399679 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" Mar 18 10:09:33.401847 master-0 kubenswrapper[8244]: I0318 10:09:33.401776 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:09:33.401847 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:09:33.401847 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:09:33.401847 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:09:33.402644 master-0 kubenswrapper[8244]: I0318 10:09:33.401842 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:09:33.480566 master-0 kubenswrapper[8244]: I0318 10:09:33.480496 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 18 10:09:33.484596 master-0 kubenswrapper[8244]: I0318 10:09:33.484542 8244 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 18 10:09:33.747775 master-0 kubenswrapper[8244]: I0318 10:09:33.747595 8244 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2610d88e-f450-455a-9db5-dc59c1d97bf4" path="/var/lib/kubelet/pods/2610d88e-f450-455a-9db5-dc59c1d97bf4/volumes" Mar 18 10:09:34.402169 master-0 kubenswrapper[8244]: I0318 10:09:34.402067 8244 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:09:34.402169 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:09:34.402169 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:09:34.402169 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:09:34.402169 master-0 kubenswrapper[8244]: I0318 10:09:34.402155 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:09:35.401463 master-0 kubenswrapper[8244]: I0318 10:09:35.401354 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:09:35.401463 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:09:35.401463 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:09:35.401463 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:09:35.401981 master-0 kubenswrapper[8244]: I0318 10:09:35.401474 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:09:35.688292 master-0 kubenswrapper[8244]: I0318 10:09:35.688201 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": net/http: request 
canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 10:09:35.689255 master-0 kubenswrapper[8244]: I0318 10:09:35.689205 8244 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 10:09:35.911533 master-0 kubenswrapper[8244]: I0318 10:09:35.911450 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-b865698dc-pgtbr_bb35841e-d992-4044-aaaa-06c9faf47bd0/service-ca-operator/1.log" Mar 18 10:09:35.911533 master-0 kubenswrapper[8244]: I0318 10:09:35.911532 8244 generic.go:334] "Generic (PLEG): container finished" podID="bb35841e-d992-4044-aaaa-06c9faf47bd0" containerID="d49c249df3f862614187a3b820449471cb0684b53fb2bc542b281bed1f3be2fd" exitCode=0 Mar 18 10:09:35.911944 master-0 kubenswrapper[8244]: I0318 10:09:35.911576 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr" event={"ID":"bb35841e-d992-4044-aaaa-06c9faf47bd0","Type":"ContainerDied","Data":"d49c249df3f862614187a3b820449471cb0684b53fb2bc542b281bed1f3be2fd"} Mar 18 10:09:35.911944 master-0 kubenswrapper[8244]: I0318 10:09:35.911623 8244 scope.go:117] "RemoveContainer" containerID="76f59e21155c1d71669d55451f86d8b5a3fe790b476c844c6bc57c22a2e68f76" Mar 18 10:09:35.913000 master-0 kubenswrapper[8244]: I0318 10:09:35.912891 8244 scope.go:117] "RemoveContainer" containerID="d49c249df3f862614187a3b820449471cb0684b53fb2bc542b281bed1f3be2fd" Mar 18 10:09:36.402010 master-0 kubenswrapper[8244]: I0318 10:09:36.401926 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:09:36.402010 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:09:36.402010 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:09:36.402010 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:09:36.402010 master-0 kubenswrapper[8244]: I0318 10:09:36.402004 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:09:36.923272 master-0 kubenswrapper[8244]: I0318 10:09:36.923178 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr" event={"ID":"bb35841e-d992-4044-aaaa-06c9faf47bd0","Type":"ContainerStarted","Data":"4834fc68695b83fc01b86ede2280ff7bcf0ec2741aa9a413b978d5df3b88c306"} Mar 18 10:09:37.186128 master-0 kubenswrapper[8244]: I0318 10:09:37.185976 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 10:09:37.186128 master-0 kubenswrapper[8244]: I0318 10:09:37.186043 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" 
Mar 18 10:09:37.402017 master-0 kubenswrapper[8244]: I0318 10:09:37.401939 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:09:37.402017 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:09:37.402017 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:09:37.402017 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:09:37.402431 master-0 kubenswrapper[8244]: I0318 10:09:37.402052 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:09:37.980941 master-0 kubenswrapper[8244]: E0318 10:09:37.980776 8244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 10:09:38.402290 master-0 kubenswrapper[8244]: I0318 10:09:38.402198 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:09:38.402290 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:09:38.402290 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:09:38.402290 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:09:38.402730 master-0 kubenswrapper[8244]: I0318 10:09:38.402287 8244 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:09:38.688478 master-0 kubenswrapper[8244]: I0318 10:09:38.688277 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 10:09:38.688973 master-0 kubenswrapper[8244]: I0318 10:09:38.688908 8244 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 10:09:38.977942 master-0 kubenswrapper[8244]: E0318 10:09:38.977708 8244 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T10:09:28Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T10:09:28Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T10:09:28Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T10:09:28Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 10:09:39.401700 master-0 kubenswrapper[8244]: I0318 10:09:39.401629 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:09:39.401700 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:09:39.401700 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:09:39.401700 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:09:39.402611 master-0 kubenswrapper[8244]: I0318 10:09:39.401739 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Mar 18 10:09:39.733964 master-0 kubenswrapper[8244]: I0318 10:09:39.733799 8244 scope.go:117] "RemoveContainer" containerID="f1f4785a8e07522509ea4dcc453b0c2d3e2548d2f8a5d0fc72eb3f727a1c3d90" Mar 18 10:09:40.185320 master-0 kubenswrapper[8244]: I0318 10:09:40.185219 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 10:09:40.185548 master-0 kubenswrapper[8244]: I0318 10:09:40.185317 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 10:09:40.402769 master-0 kubenswrapper[8244]: I0318 10:09:40.402673 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:09:40.402769 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:09:40.402769 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:09:40.402769 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:09:40.403907 master-0 kubenswrapper[8244]: I0318 10:09:40.402773 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 18 10:09:40.733562 master-0 kubenswrapper[8244]: I0318 10:09:40.733388 8244 scope.go:117] "RemoveContainer" containerID="c0b6e3b46ac87b79d91e8ba9d05e392b0a7e135e1b0676e08c471b66babdb7f6" Mar 18 10:09:40.733957 master-0 kubenswrapper[8244]: E0318 10:09:40.733903 8244 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-6f69995874-lnq7l_openshift-machine-api(1084562a-20a0-432d-b739-90bc0a4daff2)\"" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l" podUID="1084562a-20a0-432d-b739-90bc0a4daff2" Mar 18 10:09:40.961484 master-0 kubenswrapper[8244]: I0318 10:09:40.961410 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_af8e875368eec13e995ea08015e08c42/cluster-policy-controller/3.log" Mar 18 10:09:40.963960 master-0 kubenswrapper[8244]: I0318 10:09:40.963902 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_af8e875368eec13e995ea08015e08c42/kube-controller-manager/0.log" Mar 18 10:09:40.964118 master-0 kubenswrapper[8244]: I0318 10:09:40.963997 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"af8e875368eec13e995ea08015e08c42","Type":"ContainerStarted","Data":"7ca73c96270bb01e4b2a501f5fca8a82d6d3109e114172103ea987822829d77c"} Mar 18 10:09:41.401968 master-0 kubenswrapper[8244]: I0318 10:09:41.401882 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:09:41.401968 master-0 kubenswrapper[8244]: 
[-]has-synced failed: reason withheld Mar 18 10:09:41.401968 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:09:41.401968 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:09:41.402401 master-0 kubenswrapper[8244]: I0318 10:09:41.402004 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:09:41.688866 master-0 kubenswrapper[8244]: I0318 10:09:41.688723 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 10:09:41.689875 master-0 kubenswrapper[8244]: I0318 10:09:41.688886 8244 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 10:09:41.689875 master-0 kubenswrapper[8244]: I0318 10:09:41.688954 8244 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" Mar 18 10:09:41.689875 master-0 kubenswrapper[8244]: I0318 10:09:41.689648 8244 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"6ed6678817d1dbeb82e03a25e183f0798cbf1dafc08404b095ad2e689d372212"} pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" 
containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Mar 18 10:09:41.689875 master-0 kubenswrapper[8244]: I0318 10:09:41.689686 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" containerID="cri-o://6ed6678817d1dbeb82e03a25e183f0798cbf1dafc08404b095ad2e689d372212" gracePeriod=30 Mar 18 10:09:41.704903 master-0 kubenswrapper[8244]: I0318 10:09:41.704790 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": read tcp 10.128.0.2:33446->10.128.0.18:8443: read: connection reset by peer" start-of-body= Mar 18 10:09:41.705088 master-0 kubenswrapper[8244]: I0318 10:09:41.704923 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": read tcp 10.128.0.2:33446->10.128.0.18:8443: read: connection reset by peer" Mar 18 10:09:41.975511 master-0 kubenswrapper[8244]: I0318 10:09:41.975481 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-495pg_0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480/openshift-config-operator/3.log" Mar 18 10:09:41.976479 master-0 kubenswrapper[8244]: I0318 10:09:41.976455 8244 generic.go:334] "Generic (PLEG): container finished" podID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerID="6ed6678817d1dbeb82e03a25e183f0798cbf1dafc08404b095ad2e689d372212" exitCode=255 Mar 18 10:09:41.976589 master-0 kubenswrapper[8244]: I0318 10:09:41.976528 8244 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" event={"ID":"0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480","Type":"ContainerDied","Data":"6ed6678817d1dbeb82e03a25e183f0798cbf1dafc08404b095ad2e689d372212"} Mar 18 10:09:41.976692 master-0 kubenswrapper[8244]: I0318 10:09:41.976678 8244 scope.go:117] "RemoveContainer" containerID="28649efad05eac5b0f41333b14d359f00b8f30fb75f4db907f9a07ca5b91b9da" Mar 18 10:09:42.401697 master-0 kubenswrapper[8244]: I0318 10:09:42.401625 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:09:42.401697 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:09:42.401697 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:09:42.401697 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:09:42.401959 master-0 kubenswrapper[8244]: I0318 10:09:42.401704 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:09:42.812749 master-0 kubenswrapper[8244]: I0318 10:09:42.812624 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:09:42.812749 master-0 kubenswrapper[8244]: I0318 10:09:42.812710 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:09:42.986461 master-0 kubenswrapper[8244]: I0318 10:09:42.986368 8244 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-495pg_0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480/openshift-config-operator/3.log" Mar 18 10:09:42.987001 master-0 kubenswrapper[8244]: I0318 10:09:42.986941 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" event={"ID":"0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480","Type":"ContainerStarted","Data":"aa47c9755535d324b195d5291d5e6880dd799ecb9b7a14d4179b0e646fc495b7"} Mar 18 10:09:43.186142 master-0 kubenswrapper[8244]: I0318 10:09:43.185938 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 10:09:43.186142 master-0 kubenswrapper[8244]: I0318 10:09:43.186028 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 10:09:43.186142 master-0 kubenswrapper[8244]: I0318 10:09:43.186135 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" Mar 18 10:09:43.402781 master-0 kubenswrapper[8244]: I0318 10:09:43.402677 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:09:43.402781 master-0 
kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:09:43.402781 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:09:43.402781 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:09:43.402781 master-0 kubenswrapper[8244]: I0318 10:09:43.402769 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:09:44.402558 master-0 kubenswrapper[8244]: I0318 10:09:44.402475 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:09:44.402558 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:09:44.402558 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:09:44.402558 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:09:44.403563 master-0 kubenswrapper[8244]: I0318 10:09:44.402569 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:09:45.401703 master-0 kubenswrapper[8244]: I0318 10:09:45.401609 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:09:45.401703 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:09:45.401703 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:09:45.401703 master-0 kubenswrapper[8244]: healthz check failed Mar 18 
10:09:45.402270 master-0 kubenswrapper[8244]: I0318 10:09:45.401707 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:09:45.813540 master-0 kubenswrapper[8244]: I0318 10:09:45.813481 8244 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 10:09:45.814458 master-0 kubenswrapper[8244]: I0318 10:09:45.814410 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="af8e875368eec13e995ea08015e08c42" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 10:09:46.007629 master-0 kubenswrapper[8244]: I0318 10:09:46.007388 8244 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-495pg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 10:09:46.007629 master-0 kubenswrapper[8244]: I0318 10:09:46.007527 8244 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" podUID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerName="openshift-config-operator" probeResult="failure" output="Get 
\"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 10:09:46.024348 master-0 kubenswrapper[8244]: I0318 10:09:46.024294 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-6-retry-1-master-0"] Mar 18 10:09:46.024609 master-0 kubenswrapper[8244]: E0318 10:09:46.024578 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90db95c5-2017-4b04-b11c-9844947c5be9" containerName="installer" Mar 18 10:09:46.024609 master-0 kubenswrapper[8244]: I0318 10:09:46.024593 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="90db95c5-2017-4b04-b11c-9844947c5be9" containerName="installer" Mar 18 10:09:46.024676 master-0 kubenswrapper[8244]: E0318 10:09:46.024619 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea5939e-5f4d-4028-9384-2ec5710ecdc8" containerName="installer" Mar 18 10:09:46.024676 master-0 kubenswrapper[8244]: I0318 10:09:46.024629 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea5939e-5f4d-4028-9384-2ec5710ecdc8" containerName="installer" Mar 18 10:09:46.024676 master-0 kubenswrapper[8244]: E0318 10:09:46.024646 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2610d88e-f450-455a-9db5-dc59c1d97bf4" containerName="installer" Mar 18 10:09:46.024676 master-0 kubenswrapper[8244]: I0318 10:09:46.024654 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="2610d88e-f450-455a-9db5-dc59c1d97bf4" containerName="installer" Mar 18 10:09:46.024676 master-0 kubenswrapper[8244]: E0318 10:09:46.024670 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="449dc8b3-72b7-4be5-b5ab-ed4d632f52b2" containerName="installer" Mar 18 10:09:46.024676 master-0 kubenswrapper[8244]: I0318 10:09:46.024678 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="449dc8b3-72b7-4be5-b5ab-ed4d632f52b2" containerName="installer" Mar 18 10:09:46.025055 master-0 
kubenswrapper[8244]: E0318 10:09:46.024693 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a8662e-66f1-4aee-9344-564bb4ac4f9a" containerName="installer" Mar 18 10:09:46.025055 master-0 kubenswrapper[8244]: I0318 10:09:46.024701 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a8662e-66f1-4aee-9344-564bb4ac4f9a" containerName="installer" Mar 18 10:09:46.025306 master-0 kubenswrapper[8244]: I0318 10:09:46.025269 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="449dc8b3-72b7-4be5-b5ab-ed4d632f52b2" containerName="installer" Mar 18 10:09:46.025358 master-0 kubenswrapper[8244]: I0318 10:09:46.025317 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a8662e-66f1-4aee-9344-564bb4ac4f9a" containerName="installer" Mar 18 10:09:46.025358 master-0 kubenswrapper[8244]: I0318 10:09:46.025343 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="2610d88e-f450-455a-9db5-dc59c1d97bf4" containerName="installer" Mar 18 10:09:46.025416 master-0 kubenswrapper[8244]: I0318 10:09:46.025367 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="90db95c5-2017-4b04-b11c-9844947c5be9" containerName="installer" Mar 18 10:09:46.025416 master-0 kubenswrapper[8244]: I0318 10:09:46.025385 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ea5939e-5f4d-4028-9384-2ec5710ecdc8" containerName="installer" Mar 18 10:09:46.025910 master-0 kubenswrapper[8244]: I0318 10:09:46.025888 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-6-retry-1-master-0" Mar 18 10:09:46.029728 master-0 kubenswrapper[8244]: I0318 10:09:46.029225 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-zr9bx" Mar 18 10:09:46.029948 master-0 kubenswrapper[8244]: I0318 10:09:46.029917 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 18 10:09:46.030408 master-0 kubenswrapper[8244]: I0318 10:09:46.030372 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-retry-1-master-0"] Mar 18 10:09:46.031334 master-0 kubenswrapper[8244]: I0318 10:09:46.031311 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" Mar 18 10:09:46.039642 master-0 kubenswrapper[8244]: I0318 10:09:46.039600 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-76rsr" Mar 18 10:09:46.039982 master-0 kubenswrapper[8244]: I0318 10:09:46.039929 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 18 10:09:46.092601 master-0 kubenswrapper[8244]: I0318 10:09:46.052940 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-6-retry-1-master-0"] Mar 18 10:09:46.092601 master-0 kubenswrapper[8244]: I0318 10:09:46.058658 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-retry-1-master-0"] Mar 18 10:09:46.170957 master-0 kubenswrapper[8244]: I0318 10:09:46.168666 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fcf01f63-ed66-4f0d-b2df-97c77bbf8543-var-lock\") pod 
\"installer-6-retry-1-master-0\" (UID: \"fcf01f63-ed66-4f0d-b2df-97c77bbf8543\") " pod="openshift-kube-scheduler/installer-6-retry-1-master-0" Mar 18 10:09:46.170957 master-0 kubenswrapper[8244]: I0318 10:09:46.168730 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fcf01f63-ed66-4f0d-b2df-97c77bbf8543-kubelet-dir\") pod \"installer-6-retry-1-master-0\" (UID: \"fcf01f63-ed66-4f0d-b2df-97c77bbf8543\") " pod="openshift-kube-scheduler/installer-6-retry-1-master-0" Mar 18 10:09:46.170957 master-0 kubenswrapper[8244]: I0318 10:09:46.168770 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a6716938-ca14-4000-b7f1-b60e93e93c0d-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"a6716938-ca14-4000-b7f1-b60e93e93c0d\") " pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" Mar 18 10:09:46.170957 master-0 kubenswrapper[8244]: I0318 10:09:46.168801 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6716938-ca14-4000-b7f1-b60e93e93c0d-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"a6716938-ca14-4000-b7f1-b60e93e93c0d\") " pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" Mar 18 10:09:46.170957 master-0 kubenswrapper[8244]: I0318 10:09:46.168864 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a6716938-ca14-4000-b7f1-b60e93e93c0d-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"a6716938-ca14-4000-b7f1-b60e93e93c0d\") " pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" Mar 18 10:09:46.170957 master-0 kubenswrapper[8244]: I0318 10:09:46.168899 8244 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fcf01f63-ed66-4f0d-b2df-97c77bbf8543-kube-api-access\") pod \"installer-6-retry-1-master-0\" (UID: \"fcf01f63-ed66-4f0d-b2df-97c77bbf8543\") " pod="openshift-kube-scheduler/installer-6-retry-1-master-0" Mar 18 10:09:46.270438 master-0 kubenswrapper[8244]: I0318 10:09:46.270266 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fcf01f63-ed66-4f0d-b2df-97c77bbf8543-var-lock\") pod \"installer-6-retry-1-master-0\" (UID: \"fcf01f63-ed66-4f0d-b2df-97c77bbf8543\") " pod="openshift-kube-scheduler/installer-6-retry-1-master-0" Mar 18 10:09:46.270438 master-0 kubenswrapper[8244]: I0318 10:09:46.270373 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fcf01f63-ed66-4f0d-b2df-97c77bbf8543-kubelet-dir\") pod \"installer-6-retry-1-master-0\" (UID: \"fcf01f63-ed66-4f0d-b2df-97c77bbf8543\") " pod="openshift-kube-scheduler/installer-6-retry-1-master-0" Mar 18 10:09:46.270438 master-0 kubenswrapper[8244]: I0318 10:09:46.270432 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a6716938-ca14-4000-b7f1-b60e93e93c0d-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"a6716938-ca14-4000-b7f1-b60e93e93c0d\") " pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" Mar 18 10:09:46.270800 master-0 kubenswrapper[8244]: I0318 10:09:46.270484 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6716938-ca14-4000-b7f1-b60e93e93c0d-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"a6716938-ca14-4000-b7f1-b60e93e93c0d\") " pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" Mar 18 
10:09:46.270800 master-0 kubenswrapper[8244]: I0318 10:09:46.270536 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a6716938-ca14-4000-b7f1-b60e93e93c0d-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"a6716938-ca14-4000-b7f1-b60e93e93c0d\") " pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" Mar 18 10:09:46.270800 master-0 kubenswrapper[8244]: I0318 10:09:46.270574 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fcf01f63-ed66-4f0d-b2df-97c77bbf8543-kube-api-access\") pod \"installer-6-retry-1-master-0\" (UID: \"fcf01f63-ed66-4f0d-b2df-97c77bbf8543\") " pod="openshift-kube-scheduler/installer-6-retry-1-master-0" Mar 18 10:09:46.271172 master-0 kubenswrapper[8244]: I0318 10:09:46.271077 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a6716938-ca14-4000-b7f1-b60e93e93c0d-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"a6716938-ca14-4000-b7f1-b60e93e93c0d\") " pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" Mar 18 10:09:46.271263 master-0 kubenswrapper[8244]: I0318 10:09:46.271205 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a6716938-ca14-4000-b7f1-b60e93e93c0d-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"a6716938-ca14-4000-b7f1-b60e93e93c0d\") " pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" Mar 18 10:09:46.271263 master-0 kubenswrapper[8244]: I0318 10:09:46.271212 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fcf01f63-ed66-4f0d-b2df-97c77bbf8543-kubelet-dir\") pod \"installer-6-retry-1-master-0\" (UID: \"fcf01f63-ed66-4f0d-b2df-97c77bbf8543\") " 
pod="openshift-kube-scheduler/installer-6-retry-1-master-0" Mar 18 10:09:46.271520 master-0 kubenswrapper[8244]: I0318 10:09:46.271240 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fcf01f63-ed66-4f0d-b2df-97c77bbf8543-var-lock\") pod \"installer-6-retry-1-master-0\" (UID: \"fcf01f63-ed66-4f0d-b2df-97c77bbf8543\") " pod="openshift-kube-scheduler/installer-6-retry-1-master-0" Mar 18 10:09:46.300313 master-0 kubenswrapper[8244]: I0318 10:09:46.300228 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6716938-ca14-4000-b7f1-b60e93e93c0d-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"a6716938-ca14-4000-b7f1-b60e93e93c0d\") " pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" Mar 18 10:09:46.302004 master-0 kubenswrapper[8244]: I0318 10:09:46.301940 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fcf01f63-ed66-4f0d-b2df-97c77bbf8543-kube-api-access\") pod \"installer-6-retry-1-master-0\" (UID: \"fcf01f63-ed66-4f0d-b2df-97c77bbf8543\") " pod="openshift-kube-scheduler/installer-6-retry-1-master-0" Mar 18 10:09:46.401215 master-0 kubenswrapper[8244]: I0318 10:09:46.401132 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:09:46.401215 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:09:46.401215 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:09:46.401215 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:09:46.401666 master-0 kubenswrapper[8244]: I0318 10:09:46.401218 8244 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:09:46.415563 master-0 kubenswrapper[8244]: I0318 10:09:46.415476 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-retry-1-master-0" Mar 18 10:09:46.441398 master-0 kubenswrapper[8244]: I0318 10:09:46.441307 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" Mar 18 10:09:46.593728 master-0 kubenswrapper[8244]: I0318 10:09:46.593551 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" Mar 18 10:09:46.961568 master-0 kubenswrapper[8244]: I0318 10:09:46.961515 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-6-retry-1-master-0"] Mar 18 10:09:47.029446 master-0 kubenswrapper[8244]: I0318 10:09:47.029400 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-retry-1-master-0"] Mar 18 10:09:47.035431 master-0 kubenswrapper[8244]: I0318 10:09:47.035378 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-6bb5bfb6fd-lk698_ec53d7fa-445b-4e1d-84ef-545f08e80ccc/kube-storage-version-migrator-operator/1.log" Mar 18 10:09:47.035492 master-0 kubenswrapper[8244]: I0318 10:09:47.035464 8244 generic.go:334] "Generic (PLEG): container finished" podID="ec53d7fa-445b-4e1d-84ef-545f08e80ccc" containerID="ab9a533206bf10cbc0086475add5139b53093ab44226d73893369fd1ba1ed0a0" exitCode=0 Mar 18 10:09:47.035603 master-0 kubenswrapper[8244]: I0318 10:09:47.035555 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698" event={"ID":"ec53d7fa-445b-4e1d-84ef-545f08e80ccc","Type":"ContainerDied","Data":"ab9a533206bf10cbc0086475add5139b53093ab44226d73893369fd1ba1ed0a0"} Mar 18 10:09:47.035659 master-0 kubenswrapper[8244]: I0318 10:09:47.035637 8244 scope.go:117] "RemoveContainer" containerID="100b826fb47409f3adda82931968130591dc6b1e7420f5ccfd2ef57c6281504c" Mar 18 10:09:47.036261 master-0 kubenswrapper[8244]: I0318 10:09:47.036222 8244 scope.go:117] "RemoveContainer" containerID="ab9a533206bf10cbc0086475add5139b53093ab44226d73893369fd1ba1ed0a0" Mar 18 10:09:47.041627 master-0 kubenswrapper[8244]: I0318 10:09:47.041581 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-d65958b8-zz68c_0d72e695-0183-4ee8-8add-5425e67f7138/openshift-apiserver-operator/1.log" Mar 18 10:09:47.041710 master-0 kubenswrapper[8244]: I0318 10:09:47.041656 8244 generic.go:334] "Generic (PLEG): container finished" podID="0d72e695-0183-4ee8-8add-5425e67f7138" containerID="7d6fd2e1bc4be1b2a613ed03b0fa77f5671b8e216ea0aab842b063aa213fff8f" exitCode=0 Mar 18 10:09:47.041779 master-0 kubenswrapper[8244]: I0318 10:09:47.041726 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c" event={"ID":"0d72e695-0183-4ee8-8add-5425e67f7138","Type":"ContainerDied","Data":"7d6fd2e1bc4be1b2a613ed03b0fa77f5671b8e216ea0aab842b063aa213fff8f"} Mar 18 10:09:47.042249 master-0 kubenswrapper[8244]: I0318 10:09:47.042225 8244 scope.go:117] "RemoveContainer" containerID="7d6fd2e1bc4be1b2a613ed03b0fa77f5671b8e216ea0aab842b063aa213fff8f" Mar 18 10:09:47.044734 master-0 kubenswrapper[8244]: I0318 10:09:47.044667 8244 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-xnvn9_29fbc78b-1887-40d4-8165-f0f7cc40b583/machine-api-operator/0.log" Mar 18 10:09:47.046091 master-0 kubenswrapper[8244]: I0318 10:09:47.045974 8244 generic.go:334] "Generic (PLEG): container finished" podID="29fbc78b-1887-40d4-8165-f0f7cc40b583" containerID="8bc81d8dfdc71ea2b5b45a9af5008e6292938bf340e41102f31bdd98b3d93eaa" exitCode=255 Mar 18 10:09:47.046091 master-0 kubenswrapper[8244]: I0318 10:09:47.046045 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-xnvn9" event={"ID":"29fbc78b-1887-40d4-8165-f0f7cc40b583","Type":"ContainerDied","Data":"8bc81d8dfdc71ea2b5b45a9af5008e6292938bf340e41102f31bdd98b3d93eaa"} Mar 18 10:09:47.046518 master-0 kubenswrapper[8244]: I0318 10:09:47.046422 8244 scope.go:117] "RemoveContainer" containerID="8bc81d8dfdc71ea2b5b45a9af5008e6292938bf340e41102f31bdd98b3d93eaa" Mar 18 10:09:47.048644 master-0 kubenswrapper[8244]: I0318 10:09:47.048436 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-smghb_6a6a616d-012a-479e-ab3d-b21295ea1805/kube-apiserver-operator/1.log" Mar 18 10:09:47.048644 master-0 kubenswrapper[8244]: I0318 10:09:47.048474 8244 generic.go:334] "Generic (PLEG): container finished" podID="6a6a616d-012a-479e-ab3d-b21295ea1805" containerID="1438e5c0b41d2a2cdef9ebed19bce07d60cb299edfd66da1254cb9b0f6f74353" exitCode=0 Mar 18 10:09:47.048644 master-0 kubenswrapper[8244]: I0318 10:09:47.048531 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb" event={"ID":"6a6a616d-012a-479e-ab3d-b21295ea1805","Type":"ContainerDied","Data":"1438e5c0b41d2a2cdef9ebed19bce07d60cb299edfd66da1254cb9b0f6f74353"} Mar 18 10:09:47.048897 master-0 kubenswrapper[8244]: I0318 10:09:47.048879 8244 scope.go:117] "RemoveContainer" 
containerID="1438e5c0b41d2a2cdef9ebed19bce07d60cb299edfd66da1254cb9b0f6f74353" Mar 18 10:09:47.050689 master-0 kubenswrapper[8244]: I0318 10:09:47.050665 8244 generic.go:334] "Generic (PLEG): container finished" podID="29490aed-9c97-42d1-94c8-44d1de13b70c" containerID="7dacdb62f1945b9bcbdc5ee51170fb7ad65d9a415432a7a5c1a8a53dc9179ca2" exitCode=0 Mar 18 10:09:47.050767 master-0 kubenswrapper[8244]: I0318 10:09:47.050723 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-4kr54" event={"ID":"29490aed-9c97-42d1-94c8-44d1de13b70c","Type":"ContainerDied","Data":"7dacdb62f1945b9bcbdc5ee51170fb7ad65d9a415432a7a5c1a8a53dc9179ca2"} Mar 18 10:09:47.051176 master-0 kubenswrapper[8244]: I0318 10:09:47.051148 8244 scope.go:117] "RemoveContainer" containerID="7dacdb62f1945b9bcbdc5ee51170fb7ad65d9a415432a7a5c1a8a53dc9179ca2" Mar 18 10:09:47.052398 master-0 kubenswrapper[8244]: I0318 10:09:47.052366 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-4q9tr_f076eaf0-b041-4db0-ba06-3d85e23bb654/authentication-operator/1.log" Mar 18 10:09:47.052449 master-0 kubenswrapper[8244]: I0318 10:09:47.052419 8244 generic.go:334] "Generic (PLEG): container finished" podID="f076eaf0-b041-4db0-ba06-3d85e23bb654" containerID="b5df01736cfc47aa85b36fd7020d93ab1a10c4989f7408f5d6725b96384201c0" exitCode=0 Mar 18 10:09:47.052500 master-0 kubenswrapper[8244]: I0318 10:09:47.052481 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr" event={"ID":"f076eaf0-b041-4db0-ba06-3d85e23bb654","Type":"ContainerDied","Data":"b5df01736cfc47aa85b36fd7020d93ab1a10c4989f7408f5d6725b96384201c0"} Mar 18 10:09:47.052900 master-0 kubenswrapper[8244]: I0318 10:09:47.052877 8244 scope.go:117] "RemoveContainer" containerID="b5df01736cfc47aa85b36fd7020d93ab1a10c4989f7408f5d6725b96384201c0" 
Mar 18 10:09:47.054176 master-0 kubenswrapper[8244]: I0318 10:09:47.054141 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-8srnz_9ccdc221-4ec5-487e-8ec4-85284ed628d8/network-operator/1.log" Mar 18 10:09:47.054257 master-0 kubenswrapper[8244]: I0318 10:09:47.054177 8244 generic.go:334] "Generic (PLEG): container finished" podID="9ccdc221-4ec5-487e-8ec4-85284ed628d8" containerID="d104795039a77eee9eb4fddfb0911cce88afaee884dd9159c6ea0d77b9f36476" exitCode=0 Mar 18 10:09:47.054257 master-0 kubenswrapper[8244]: I0318 10:09:47.054197 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-8srnz" event={"ID":"9ccdc221-4ec5-487e-8ec4-85284ed628d8","Type":"ContainerDied","Data":"d104795039a77eee9eb4fddfb0911cce88afaee884dd9159c6ea0d77b9f36476"} Mar 18 10:09:47.054596 master-0 kubenswrapper[8244]: I0318 10:09:47.054517 8244 scope.go:117] "RemoveContainer" containerID="d104795039a77eee9eb4fddfb0911cce88afaee884dd9159c6ea0d77b9f36476" Mar 18 10:09:47.056904 master-0 kubenswrapper[8244]: I0318 10:09:47.056882 8244 generic.go:334] "Generic (PLEG): container finished" podID="8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d" containerID="ef56f38c2bc505e5fbc078e115510767e1b06d3c1193709a420591be902fdca8" exitCode=0 Mar 18 10:09:47.056976 master-0 kubenswrapper[8244]: I0318 10:09:47.056936 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" event={"ID":"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d","Type":"ContainerDied","Data":"ef56f38c2bc505e5fbc078e115510767e1b06d3c1193709a420591be902fdca8"} Mar 18 10:09:47.057198 master-0 kubenswrapper[8244]: I0318 10:09:47.057179 8244 scope.go:117] "RemoveContainer" containerID="ef56f38c2bc505e5fbc078e115510767e1b06d3c1193709a420591be902fdca8" Mar 18 10:09:47.065392 master-0 kubenswrapper[8244]: I0318 10:09:47.065360 8244 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-g25jq_3646e0cd-49c9-4a98-a2e3-efe9359cc6c4/openshift-controller-manager-operator/1.log" Mar 18 10:09:47.065469 master-0 kubenswrapper[8244]: I0318 10:09:47.065403 8244 generic.go:334] "Generic (PLEG): container finished" podID="3646e0cd-49c9-4a98-a2e3-efe9359cc6c4" containerID="69f2cdbc33296c63e514edbad7b73c69b46a3bfd3f3df3701dfc360a76760a09" exitCode=0 Mar 18 10:09:47.065469 master-0 kubenswrapper[8244]: I0318 10:09:47.065459 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq" event={"ID":"3646e0cd-49c9-4a98-a2e3-efe9359cc6c4","Type":"ContainerDied","Data":"69f2cdbc33296c63e514edbad7b73c69b46a3bfd3f3df3701dfc360a76760a09"} Mar 18 10:09:47.065769 master-0 kubenswrapper[8244]: I0318 10:09:47.065735 8244 scope.go:117] "RemoveContainer" containerID="69f2cdbc33296c63e514edbad7b73c69b46a3bfd3f3df3701dfc360a76760a09" Mar 18 10:09:47.066343 master-0 kubenswrapper[8244]: W0318 10:09:47.066311 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda6716938_ca14_4000_b7f1_b60e93e93c0d.slice/crio-027cb739429dc761a3f2ade604437810a5898c43151b24416d6963442db7ad65 WatchSource:0}: Error finding container 027cb739429dc761a3f2ade604437810a5898c43151b24416d6963442db7ad65: Status 404 returned error can't find the container with id 027cb739429dc761a3f2ade604437810a5898c43151b24416d6963442db7ad65 Mar 18 10:09:47.067784 master-0 kubenswrapper[8244]: I0318 10:09:47.067754 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-mw9tt_9f5c64aa-676e-4e48-b714-02f6edb1d361/cluster-autoscaler-operator/0.log" Mar 18 10:09:47.068421 master-0 kubenswrapper[8244]: I0318 10:09:47.068339 8244 generic.go:334] "Generic (PLEG): container finished" 
podID="9f5c64aa-676e-4e48-b714-02f6edb1d361" containerID="6655987065a30c5bbf651bf96600d36185c30b2a671ea89757e4e505e5002a5d" exitCode=255 Mar 18 10:09:47.068421 master-0 kubenswrapper[8244]: I0318 10:09:47.068397 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-mw9tt" event={"ID":"9f5c64aa-676e-4e48-b714-02f6edb1d361","Type":"ContainerDied","Data":"6655987065a30c5bbf651bf96600d36185c30b2a671ea89757e4e505e5002a5d"} Mar 18 10:09:47.068851 master-0 kubenswrapper[8244]: I0318 10:09:47.068673 8244 scope.go:117] "RemoveContainer" containerID="6655987065a30c5bbf651bf96600d36185c30b2a671ea89757e4e505e5002a5d" Mar 18 10:09:47.071232 master-0 kubenswrapper[8244]: I0318 10:09:47.071039 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-pzqqc_0999f781-3299-4cb6-ba76-2a4f4584c685/kube-controller-manager-operator/1.log" Mar 18 10:09:47.071232 master-0 kubenswrapper[8244]: I0318 10:09:47.071116 8244 generic.go:334] "Generic (PLEG): container finished" podID="0999f781-3299-4cb6-ba76-2a4f4584c685" containerID="bdf23e456932d75fae6cdcf4a2bdaca513da90b17853bb40022bebbd243e87d8" exitCode=0 Mar 18 10:09:47.071374 master-0 kubenswrapper[8244]: I0318 10:09:47.071214 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc" event={"ID":"0999f781-3299-4cb6-ba76-2a4f4584c685","Type":"ContainerDied","Data":"bdf23e456932d75fae6cdcf4a2bdaca513da90b17853bb40022bebbd243e87d8"} Mar 18 10:09:47.071692 master-0 kubenswrapper[8244]: I0318 10:09:47.071667 8244 scope.go:117] "RemoveContainer" containerID="bdf23e456932d75fae6cdcf4a2bdaca513da90b17853bb40022bebbd243e87d8" Mar 18 10:09:47.073330 master-0 kubenswrapper[8244]: I0318 10:09:47.073290 8244 generic.go:334] "Generic (PLEG): container finished" 
podID="2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0" containerID="c8c319ddb107c3bc56c6d9fe6eeed7e7744a57b20e36ccaa20a733dd325d4c8f" exitCode=0 Mar 18 10:09:47.073330 master-0 kubenswrapper[8244]: I0318 10:09:47.073328 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rslmx" event={"ID":"2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0","Type":"ContainerDied","Data":"c8c319ddb107c3bc56c6d9fe6eeed7e7744a57b20e36ccaa20a733dd325d4c8f"} Mar 18 10:09:47.073687 master-0 kubenswrapper[8244]: I0318 10:09:47.073652 8244 scope.go:117] "RemoveContainer" containerID="c8c319ddb107c3bc56c6d9fe6eeed7e7744a57b20e36ccaa20a733dd325d4c8f" Mar 18 10:09:47.081022 master-0 kubenswrapper[8244]: I0318 10:09:47.080958 8244 scope.go:117] "RemoveContainer" containerID="d7fed381f588321bf949c1ee4979e243946541c605dea6e2da6f26ae56dbca2b" Mar 18 10:09:47.214431 master-0 kubenswrapper[8244]: I0318 10:09:47.214390 8244 scope.go:117] "RemoveContainer" containerID="81cd35f002f1f429688cbe007f6618850051907823664181496568b308ab47bb" Mar 18 10:09:47.292685 master-0 kubenswrapper[8244]: I0318 10:09:47.292648 8244 scope.go:117] "RemoveContainer" containerID="7899027579e9cd9f7fcc12484390d733833facf13d02a5193e75c23ee942e285" Mar 18 10:09:47.402956 master-0 kubenswrapper[8244]: I0318 10:09:47.402904 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:09:47.402956 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:09:47.402956 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:09:47.402956 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:09:47.403149 master-0 kubenswrapper[8244]: I0318 10:09:47.402983 8244 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:09:47.418381 master-0 kubenswrapper[8244]: I0318 10:09:47.418343 8244 scope.go:117] "RemoveContainer" containerID="b5bf205c4d2d39a65c5f434aca2db07e6f6c44b756c420c12726c015f7a4b2e6" Mar 18 10:09:47.479551 master-0 kubenswrapper[8244]: I0318 10:09:47.479519 8244 scope.go:117] "RemoveContainer" containerID="2795ecc70fe66ee4a0f920912ba6641b4460a6d001aedb4e015ff801933a203d" Mar 18 10:09:47.513718 master-0 kubenswrapper[8244]: I0318 10:09:47.513667 8244 scope.go:117] "RemoveContainer" containerID="bd5fe04a9ede0b84f18ed45bdc7555eb6593622c877cdf75babe4d3ead617eed" Mar 18 10:09:47.905952 master-0 kubenswrapper[8244]: I0318 10:09:47.905798 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" Mar 18 10:09:47.905952 master-0 kubenswrapper[8244]: I0318 10:09:47.905866 8244 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" Mar 18 10:09:48.081032 master-0 kubenswrapper[8244]: I0318 10:09:48.080985 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rslmx" event={"ID":"2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0","Type":"ContainerStarted","Data":"36938c90494569dfd21016b3cb5da846f858383dad40dee71e0f55e81ec956c5"} Mar 18 10:09:48.083499 master-0 kubenswrapper[8244]: I0318 10:09:48.083461 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-4kr54" event={"ID":"29490aed-9c97-42d1-94c8-44d1de13b70c","Type":"ContainerStarted","Data":"43d2cbd845dbb8a7b50d04b6cef57abec0431dc512978970dbf3224043fb6c1f"} Mar 18 10:09:48.085957 master-0 
kubenswrapper[8244]: I0318 10:09:48.085897 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr" event={"ID":"f076eaf0-b041-4db0-ba06-3d85e23bb654","Type":"ContainerStarted","Data":"3024bbe41d6ea2b4b36032c6226fc3bb57b269842627efceecbe9ceb19d09d3a"} Mar 18 10:09:48.087582 master-0 kubenswrapper[8244]: I0318 10:09:48.087557 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc" event={"ID":"0999f781-3299-4cb6-ba76-2a4f4584c685","Type":"ContainerStarted","Data":"9e59682d2f7ebb6238a85252bc639f2af30faac6bad1c79c45d7f847c02c7cc5"} Mar 18 10:09:48.089007 master-0 kubenswrapper[8244]: I0318 10:09:48.088987 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq" event={"ID":"3646e0cd-49c9-4a98-a2e3-efe9359cc6c4","Type":"ContainerStarted","Data":"36a7297e088f4c6c6d4864be9b977e245431109f91fa714a771afc9e71fad874"} Mar 18 10:09:48.090761 master-0 kubenswrapper[8244]: I0318 10:09:48.090736 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" event={"ID":"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d","Type":"ContainerStarted","Data":"890a35c9e75544981cbb11efe21b82c439f21326c21abe7bb6e440e5194299e3"} Mar 18 10:09:48.091492 master-0 kubenswrapper[8244]: I0318 10:09:48.091471 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" Mar 18 10:09:48.092946 master-0 kubenswrapper[8244]: I0318 10:09:48.092929 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-mw9tt_9f5c64aa-676e-4e48-b714-02f6edb1d361/cluster-autoscaler-operator/0.log" Mar 18 10:09:48.093196 master-0 
kubenswrapper[8244]: I0318 10:09:48.093175 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-mw9tt" event={"ID":"9f5c64aa-676e-4e48-b714-02f6edb1d361","Type":"ContainerStarted","Data":"3e37d2543f4d2fe231871aaa6ef2b5e67db5c8a439d2dfb96f0f3ec6453dd5b8"} Mar 18 10:09:48.094803 master-0 kubenswrapper[8244]: I0318 10:09:48.094784 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb" event={"ID":"6a6a616d-012a-479e-ab3d-b21295ea1805","Type":"ContainerStarted","Data":"11f3fd1e48e78b074f5c049712b72c820999148e21e4004513d954160590d8c6"} Mar 18 10:09:48.096549 master-0 kubenswrapper[8244]: I0318 10:09:48.096526 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-retry-1-master-0" event={"ID":"fcf01f63-ed66-4f0d-b2df-97c77bbf8543","Type":"ContainerStarted","Data":"cd5460a46f1af5014f09f3d74c852c3c8e1dbae9dbdc5909c502350cb309005a"} Mar 18 10:09:48.096598 master-0 kubenswrapper[8244]: I0318 10:09:48.096549 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-retry-1-master-0" event={"ID":"fcf01f63-ed66-4f0d-b2df-97c77bbf8543","Type":"ContainerStarted","Data":"918ed1f73d1c1442c0a8e7726a8b614353a7b30844e6305ebc1a1ba857285248"} Mar 18 10:09:48.098345 master-0 kubenswrapper[8244]: I0318 10:09:48.098324 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c" event={"ID":"0d72e695-0183-4ee8-8add-5425e67f7138","Type":"ContainerStarted","Data":"59669b95de7870714bf055dad76d2cde594a723fe14aa826fd88109eccea5539"} Mar 18 10:09:48.100411 master-0 kubenswrapper[8244]: I0318 10:09:48.100391 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-xnvn9_29fbc78b-1887-40d4-8165-f0f7cc40b583/machine-api-operator/0.log" Mar 18 
10:09:48.100730 master-0 kubenswrapper[8244]: I0318 10:09:48.100710 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-xnvn9" event={"ID":"29fbc78b-1887-40d4-8165-f0f7cc40b583","Type":"ContainerStarted","Data":"caa5b8fa6d8aff4ed1e73cda1242029ca1676c2dd2265608d69188eb077423f4"} Mar 18 10:09:48.102536 master-0 kubenswrapper[8244]: I0318 10:09:48.102508 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" event={"ID":"a6716938-ca14-4000-b7f1-b60e93e93c0d","Type":"ContainerStarted","Data":"07f18c8da1828af97eeefd0d942acb995fabaae660b2da8d651807992de76bb4"} Mar 18 10:09:48.102604 master-0 kubenswrapper[8244]: I0318 10:09:48.102539 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" event={"ID":"a6716938-ca14-4000-b7f1-b60e93e93c0d","Type":"ContainerStarted","Data":"027cb739429dc761a3f2ade604437810a5898c43151b24416d6963442db7ad65"} Mar 18 10:09:48.104615 master-0 kubenswrapper[8244]: I0318 10:09:48.104589 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-8srnz" event={"ID":"9ccdc221-4ec5-487e-8ec4-85284ed628d8","Type":"ContainerStarted","Data":"dda13223f5c100715c4be2da0bae2fac35de576e12445cfa09daf19c3cba6e73"} Mar 18 10:09:48.106278 master-0 kubenswrapper[8244]: I0318 10:09:48.106258 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698" event={"ID":"ec53d7fa-445b-4e1d-84ef-545f08e80ccc","Type":"ContainerStarted","Data":"16ea285300a471e70a35c5b5db7dd4bc1c50ec6774f999a8e35200b148b70772"} Mar 18 10:09:48.198312 master-0 kubenswrapper[8244]: I0318 10:09:48.198177 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" Mar 18 10:09:48.200903 master-0 kubenswrapper[8244]: I0318 10:09:48.200083 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" podStartSLOduration=2.200064927 podStartE2EDuration="2.200064927s" podCreationTimestamp="2026-03-18 10:09:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:09:48.198391206 +0000 UTC m=+904.678127334" watchObservedRunningTime="2026-03-18 10:09:48.200064927 +0000 UTC m=+904.679801055" Mar 18 10:09:48.401214 master-0 kubenswrapper[8244]: I0318 10:09:48.401142 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:09:48.401214 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:09:48.401214 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:09:48.401214 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:09:48.401536 master-0 kubenswrapper[8244]: I0318 10:09:48.401224 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:09:48.410459 master-0 kubenswrapper[8244]: I0318 10:09:48.410387 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-6-retry-1-master-0" podStartSLOduration=2.410352264 podStartE2EDuration="2.410352264s" podCreationTimestamp="2026-03-18 10:09:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-03-18 10:09:48.410034816 +0000 UTC m=+904.889770954" watchObservedRunningTime="2026-03-18 10:09:48.410352264 +0000 UTC m=+904.890088392" Mar 18 10:09:49.402899 master-0 kubenswrapper[8244]: I0318 10:09:49.402776 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:09:49.402899 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:09:49.402899 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:09:49.402899 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:09:49.403974 master-0 kubenswrapper[8244]: I0318 10:09:49.402933 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:09:50.401878 master-0 kubenswrapper[8244]: I0318 10:09:50.401790 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:09:50.401878 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:09:50.401878 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:09:50.401878 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:09:50.402229 master-0 kubenswrapper[8244]: I0318 10:09:50.401914 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:09:51.401294 master-0 
kubenswrapper[8244]: I0318 10:09:51.401229 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:09:51.401294 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:09:51.401294 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:09:51.401294 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:09:51.401963 master-0 kubenswrapper[8244]: I0318 10:09:51.401936 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:09:52.401494 master-0 kubenswrapper[8244]: I0318 10:09:52.401397 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:09:52.401494 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:09:52.401494 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:09:52.401494 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:09:52.402491 master-0 kubenswrapper[8244]: I0318 10:09:52.401503 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:09:52.825778 master-0 kubenswrapper[8244]: I0318 10:09:52.825693 8244 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:09:52.834544 
master-0 kubenswrapper[8244]: I0318 10:09:52.834500 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:09:53.402088 master-0 kubenswrapper[8244]: I0318 10:09:53.401982 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:09:53.402088 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:09:53.402088 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:09:53.402088 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:09:53.402088 master-0 kubenswrapper[8244]: I0318 10:09:53.402086 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:09:54.098293 master-0 kubenswrapper[8244]: I0318 10:09:54.098218 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-4-retry-1-master-0"] Mar 18 10:09:54.100053 master-0 kubenswrapper[8244]: I0318 10:09:54.100009 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" Mar 18 10:09:54.106467 master-0 kubenswrapper[8244]: I0318 10:09:54.104527 8244 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-244m4" Mar 18 10:09:54.106467 master-0 kubenswrapper[8244]: I0318 10:09:54.106130 8244 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 18 10:09:54.113003 master-0 kubenswrapper[8244]: I0318 10:09:54.111847 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-retry-1-master-0"] Mar 18 10:09:54.131809 master-0 kubenswrapper[8244]: I0318 10:09:54.131740 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3657106-1eea-4031-8c92-85ba6287b425-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"a3657106-1eea-4031-8c92-85ba6287b425\") " pod="openshift-kube-apiserver/installer-4-retry-1-master-0" Mar 18 10:09:54.132089 master-0 kubenswrapper[8244]: I0318 10:09:54.131918 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3657106-1eea-4031-8c92-85ba6287b425-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"a3657106-1eea-4031-8c92-85ba6287b425\") " pod="openshift-kube-apiserver/installer-4-retry-1-master-0" Mar 18 10:09:54.132089 master-0 kubenswrapper[8244]: I0318 10:09:54.132010 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a3657106-1eea-4031-8c92-85ba6287b425-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"a3657106-1eea-4031-8c92-85ba6287b425\") " pod="openshift-kube-apiserver/installer-4-retry-1-master-0" Mar 18 10:09:54.233285 master-0 
kubenswrapper[8244]: I0318 10:09:54.233160 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a3657106-1eea-4031-8c92-85ba6287b425-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"a3657106-1eea-4031-8c92-85ba6287b425\") " pod="openshift-kube-apiserver/installer-4-retry-1-master-0" Mar 18 10:09:54.233285 master-0 kubenswrapper[8244]: I0318 10:09:54.233278 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3657106-1eea-4031-8c92-85ba6287b425-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"a3657106-1eea-4031-8c92-85ba6287b425\") " pod="openshift-kube-apiserver/installer-4-retry-1-master-0" Mar 18 10:09:54.233662 master-0 kubenswrapper[8244]: I0318 10:09:54.233307 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a3657106-1eea-4031-8c92-85ba6287b425-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"a3657106-1eea-4031-8c92-85ba6287b425\") " pod="openshift-kube-apiserver/installer-4-retry-1-master-0" Mar 18 10:09:54.233662 master-0 kubenswrapper[8244]: I0318 10:09:54.233379 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3657106-1eea-4031-8c92-85ba6287b425-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"a3657106-1eea-4031-8c92-85ba6287b425\") " pod="openshift-kube-apiserver/installer-4-retry-1-master-0" Mar 18 10:09:54.233662 master-0 kubenswrapper[8244]: I0318 10:09:54.233470 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3657106-1eea-4031-8c92-85ba6287b425-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"a3657106-1eea-4031-8c92-85ba6287b425\") " pod="openshift-kube-apiserver/installer-4-retry-1-master-0" 
Mar 18 10:09:54.263567 master-0 kubenswrapper[8244]: I0318 10:09:54.262037 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3657106-1eea-4031-8c92-85ba6287b425-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"a3657106-1eea-4031-8c92-85ba6287b425\") " pod="openshift-kube-apiserver/installer-4-retry-1-master-0" Mar 18 10:09:54.401575 master-0 kubenswrapper[8244]: I0318 10:09:54.401399 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:09:54.401575 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:09:54.401575 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:09:54.401575 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:09:54.401575 master-0 kubenswrapper[8244]: I0318 10:09:54.401513 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:09:54.448451 master-0 kubenswrapper[8244]: I0318 10:09:54.448342 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" Mar 18 10:09:54.734166 master-0 kubenswrapper[8244]: I0318 10:09:54.733965 8244 scope.go:117] "RemoveContainer" containerID="c0b6e3b46ac87b79d91e8ba9d05e392b0a7e135e1b0676e08c471b66babdb7f6" Mar 18 10:09:54.934030 master-0 kubenswrapper[8244]: I0318 10:09:54.933972 8244 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-retry-1-master-0"] Mar 18 10:09:55.169523 master-0 kubenswrapper[8244]: I0318 10:09:55.169452 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-lnq7l_1084562a-20a0-432d-b739-90bc0a4daff2/cluster-baremetal-operator/2.log" Mar 18 10:09:55.169901 master-0 kubenswrapper[8244]: I0318 10:09:55.169843 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l" event={"ID":"1084562a-20a0-432d-b739-90bc0a4daff2","Type":"ContainerStarted","Data":"7c10398724db46c6f60581fe1713892f0b8db0296b1603e1fe8494cc2e0d1fe8"} Mar 18 10:09:55.172348 master-0 kubenswrapper[8244]: I0318 10:09:55.172300 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" event={"ID":"a3657106-1eea-4031-8c92-85ba6287b425","Type":"ContainerStarted","Data":"3acdf5b69c1ce66294030ac402e9c8e09366d47522c5ff94a22e2363f49e4024"} Mar 18 10:09:55.402418 master-0 kubenswrapper[8244]: I0318 10:09:55.402223 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:09:55.402418 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:09:55.402418 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:09:55.402418 master-0 kubenswrapper[8244]: healthz check 
failed Mar 18 10:09:55.402418 master-0 kubenswrapper[8244]: I0318 10:09:55.402318 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:09:56.184561 master-0 kubenswrapper[8244]: I0318 10:09:56.184457 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" event={"ID":"a3657106-1eea-4031-8c92-85ba6287b425","Type":"ContainerStarted","Data":"06c0be19470a9053df1e868da4f3dfc9b3f3db58cf48affc02d1dbbb79a51995"} Mar 18 10:09:56.218835 master-0 kubenswrapper[8244]: I0318 10:09:56.218739 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" podStartSLOduration=2.218720099 podStartE2EDuration="2.218720099s" podCreationTimestamp="2026-03-18 10:09:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:09:56.20950883 +0000 UTC m=+912.689245038" watchObservedRunningTime="2026-03-18 10:09:56.218720099 +0000 UTC m=+912.698456227" Mar 18 10:09:56.401432 master-0 kubenswrapper[8244]: I0318 10:09:56.401365 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:09:56.401432 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:09:56.401432 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:09:56.401432 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:09:56.401963 master-0 kubenswrapper[8244]: I0318 10:09:56.401438 8244 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:09:57.401855 master-0 kubenswrapper[8244]: I0318 10:09:57.401763 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:09:57.401855 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:09:57.401855 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:09:57.401855 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:09:57.403085 master-0 kubenswrapper[8244]: I0318 10:09:57.403041 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:09:58.401547 master-0 kubenswrapper[8244]: I0318 10:09:58.401448 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:09:58.401547 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:09:58.401547 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:09:58.401547 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:09:58.401547 master-0 kubenswrapper[8244]: I0318 10:09:58.401529 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:09:59.401707 
master-0 kubenswrapper[8244]: I0318 10:09:59.401611 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:09:59.401707 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:09:59.401707 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:09:59.401707 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:09:59.402616 master-0 kubenswrapper[8244]: I0318 10:09:59.401716 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:10:00.402241 master-0 kubenswrapper[8244]: I0318 10:10:00.402173 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:10:00.402241 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:10:00.402241 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:10:00.402241 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:10:00.402241 master-0 kubenswrapper[8244]: I0318 10:10:00.402251 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:10:01.401677 master-0 kubenswrapper[8244]: I0318 10:10:01.401593 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:10:01.401677 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:10:01.401677 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:10:01.401677 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:10:01.401677 master-0 kubenswrapper[8244]: I0318 10:10:01.401675 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:10:02.400042 master-0 kubenswrapper[8244]: I0318 10:10:02.400005 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:10:02.400042 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:10:02.400042 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:10:02.400042 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:10:02.400403 master-0 kubenswrapper[8244]: I0318 10:10:02.400381 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:10:03.401305 master-0 kubenswrapper[8244]: I0318 10:10:03.401200 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:10:03.401305 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:10:03.401305 master-0 
kubenswrapper[8244]: [+]process-running ok Mar 18 10:10:03.401305 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:10:03.401305 master-0 kubenswrapper[8244]: I0318 10:10:03.401280 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:10:04.402284 master-0 kubenswrapper[8244]: I0318 10:10:04.402198 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:10:04.402284 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:10:04.402284 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:10:04.402284 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:10:04.402922 master-0 kubenswrapper[8244]: I0318 10:10:04.402315 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:10:05.401197 master-0 kubenswrapper[8244]: I0318 10:10:05.401117 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:10:05.401197 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:10:05.401197 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:10:05.401197 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:10:05.401471 master-0 kubenswrapper[8244]: I0318 10:10:05.401198 8244 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:10:06.402430 master-0 kubenswrapper[8244]: I0318 10:10:06.402356 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:10:06.402430 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:10:06.402430 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:10:06.402430 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:10:06.403417 master-0 kubenswrapper[8244]: I0318 10:10:06.402440 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:10:06.796582 master-0 kubenswrapper[8244]: I0318 10:10:06.796526 8244 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv"
Mar 18 10:10:07.401491 master-0 kubenswrapper[8244]: I0318 10:10:07.401386 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:10:07.401491 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:10:07.401491 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:10:07.401491 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:10:07.402213 master-0 kubenswrapper[8244]: I0318 10:10:07.401494 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:10:08.401779 master-0 kubenswrapper[8244]: I0318 10:10:08.401683 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:10:08.401779 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:10:08.401779 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:10:08.401779 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:10:08.403169 master-0 kubenswrapper[8244]: I0318 10:10:08.401802 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:10:09.402000 master-0 kubenswrapper[8244]: I0318 10:10:09.401892 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:10:09.402000 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:10:09.402000 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:10:09.402000 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:10:09.403050 master-0 kubenswrapper[8244]: I0318 10:10:09.402001 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:10:10.402273 master-0 kubenswrapper[8244]: I0318 10:10:10.402175 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:10:10.402273 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:10:10.402273 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:10:10.402273 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:10:10.402273 master-0 kubenswrapper[8244]: I0318 10:10:10.402243 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:10:11.403196 master-0 kubenswrapper[8244]: I0318 10:10:11.403080 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:10:11.403196 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:10:11.403196 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:10:11.403196 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:10:11.404440 master-0 kubenswrapper[8244]: I0318 10:10:11.403191 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:10:12.402646 master-0 kubenswrapper[8244]: I0318 10:10:12.402516 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:10:12.402646 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:10:12.402646 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:10:12.402646 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:10:12.402646 master-0 kubenswrapper[8244]: I0318 10:10:12.402639 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:10:13.401810 master-0 kubenswrapper[8244]: I0318 10:10:13.401721 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:10:13.401810 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:10:13.401810 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:10:13.401810 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:10:13.402792 master-0 kubenswrapper[8244]: I0318 10:10:13.401852 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:10:14.401152 master-0 kubenswrapper[8244]: I0318 10:10:14.401084 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:10:14.401152 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:10:14.401152 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:10:14.401152 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:10:14.401547 master-0 kubenswrapper[8244]: I0318 10:10:14.401164 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:10:14.734457 master-0 kubenswrapper[8244]: I0318 10:10:14.733219 8244 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="9d397d3f-2d40-469c-b5a4-f278c504932a"
Mar 18 10:10:14.734457 master-0 kubenswrapper[8244]: I0318 10:10:14.733259 8244 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="9d397d3f-2d40-469c-b5a4-f278c504932a"
Mar 18 10:10:14.756022 master-0 kubenswrapper[8244]: I0318 10:10:14.755962 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0"]
Mar 18 10:10:14.760522 master-0 kubenswrapper[8244]: I0318 10:10:14.760483 8244 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-0"
Mar 18 10:10:14.770303 master-0 kubenswrapper[8244]: I0318 10:10:14.770250 8244 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0"]
Mar 18 10:10:14.796291 master-0 kubenswrapper[8244]: I0318 10:10:14.796220 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"]
Mar 18 10:10:15.346450 master-0 kubenswrapper[8244]: I0318 10:10:15.346365 8244 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="9d397d3f-2d40-469c-b5a4-f278c504932a"
Mar 18 10:10:15.346450 master-0 kubenswrapper[8244]: I0318 10:10:15.346402 8244 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="9d397d3f-2d40-469c-b5a4-f278c504932a"
Mar 18 10:10:15.407497 master-0 kubenswrapper[8244]: I0318 10:10:15.407443 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:10:15.407497 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:10:15.407497 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:10:15.407497 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:10:15.408060 master-0 kubenswrapper[8244]: I0318 10:10:15.407522 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:10:16.402287 master-0 kubenswrapper[8244]: I0318 10:10:16.402169 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:10:16.402287 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:10:16.402287 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:10:16.402287 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:10:16.403310 master-0 kubenswrapper[8244]: I0318 10:10:16.402308 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:10:17.404255 master-0 kubenswrapper[8244]: I0318 10:10:17.404190 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:10:17.404255 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:10:17.404255 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:10:17.404255 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:10:17.404255 master-0 kubenswrapper[8244]: I0318 10:10:17.404255 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:10:18.403041 master-0 kubenswrapper[8244]: I0318 10:10:18.402971 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:10:18.403041 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:10:18.403041 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:10:18.403041 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:10:18.403041 master-0 kubenswrapper[8244]: I0318 10:10:18.403040 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:10:18.529596 master-0 kubenswrapper[8244]: I0318 10:10:18.529434 8244 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Mar 18 10:10:18.530469 master-0 kubenswrapper[8244]: I0318 10:10:18.530239 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="8e27b7d086edf5d2cf47b703574641d8" containerName="kube-scheduler-cert-syncer" containerID="cri-o://e73e9ab6250891a74742cf894dfa6d6f12c07f81c7c6e29abf71445a93b042c6" gracePeriod=30
Mar 18 10:10:18.530469 master-0 kubenswrapper[8244]: I0318 10:10:18.530272 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="8e27b7d086edf5d2cf47b703574641d8" containerName="kube-scheduler-recovery-controller" containerID="cri-o://504c7c58af279fedab2f56000cc691abf8096faa6bf0c02f961583e20a138ed6" gracePeriod=30
Mar 18 10:10:18.530469 master-0 kubenswrapper[8244]: I0318 10:10:18.530250 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="8e27b7d086edf5d2cf47b703574641d8" containerName="kube-scheduler" containerID="cri-o://0f4bf1dfc4a190fd3410aa065645689966e325eb73cf7788b53ae0a9bf57f3cc" gracePeriod=30
Mar 18 10:10:18.531659 master-0 kubenswrapper[8244]: I0318 10:10:18.531229 8244 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Mar 18 10:10:18.531734 master-0 kubenswrapper[8244]: E0318 10:10:18.531702 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e27b7d086edf5d2cf47b703574641d8" containerName="wait-for-host-port"
Mar 18 10:10:18.531734 master-0 kubenswrapper[8244]: I0318 10:10:18.531727 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e27b7d086edf5d2cf47b703574641d8" containerName="wait-for-host-port"
Mar 18 10:10:18.531896 master-0 kubenswrapper[8244]: E0318 10:10:18.531775 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e27b7d086edf5d2cf47b703574641d8" containerName="kube-scheduler-recovery-controller"
Mar 18 10:10:18.531896 master-0 kubenswrapper[8244]: I0318 10:10:18.531793 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e27b7d086edf5d2cf47b703574641d8" containerName="kube-scheduler-recovery-controller"
Mar 18 10:10:18.532029 master-0 kubenswrapper[8244]: E0318 10:10:18.531903 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e27b7d086edf5d2cf47b703574641d8" containerName="kube-scheduler"
Mar 18 10:10:18.532029 master-0 kubenswrapper[8244]: I0318 10:10:18.531922 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e27b7d086edf5d2cf47b703574641d8" containerName="kube-scheduler"
Mar 18 10:10:18.532029 master-0 kubenswrapper[8244]: E0318 10:10:18.531940 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e27b7d086edf5d2cf47b703574641d8" containerName="kube-scheduler-cert-syncer"
Mar 18 10:10:18.532029 master-0 kubenswrapper[8244]: I0318 10:10:18.531953 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e27b7d086edf5d2cf47b703574641d8" containerName="kube-scheduler-cert-syncer"
Mar 18 10:10:18.532294 master-0 kubenswrapper[8244]: I0318 10:10:18.532253 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e27b7d086edf5d2cf47b703574641d8" containerName="kube-scheduler"
Mar 18 10:10:18.532366 master-0 kubenswrapper[8244]: I0318 10:10:18.532307 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e27b7d086edf5d2cf47b703574641d8" containerName="kube-scheduler-cert-syncer"
Mar 18 10:10:18.532366 master-0 kubenswrapper[8244]: I0318 10:10:18.532330 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e27b7d086edf5d2cf47b703574641d8" containerName="kube-scheduler-recovery-controller"
Mar 18 10:10:18.532714 master-0 kubenswrapper[8244]: E0318 10:10:18.532662 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e27b7d086edf5d2cf47b703574641d8" containerName="kube-scheduler"
Mar 18 10:10:18.532714 master-0 kubenswrapper[8244]: I0318 10:10:18.532699 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e27b7d086edf5d2cf47b703574641d8" containerName="kube-scheduler"
Mar 18 10:10:18.533099 master-0 kubenswrapper[8244]: I0318 10:10:18.533049 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e27b7d086edf5d2cf47b703574641d8" containerName="kube-scheduler"
Mar 18 10:10:18.656801 master-0 kubenswrapper[8244]: I0318 10:10:18.656723 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/11a2f93448b9d54da9854663936e2b73-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"11a2f93448b9d54da9854663936e2b73\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 18 10:10:18.657033 master-0 kubenswrapper[8244]: I0318 10:10:18.656876 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/11a2f93448b9d54da9854663936e2b73-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"11a2f93448b9d54da9854663936e2b73\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 18 10:10:18.735731 master-0 kubenswrapper[8244]: I0318 10:10:18.735664 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8e27b7d086edf5d2cf47b703574641d8/kube-scheduler-cert-syncer/0.log"
Mar 18 10:10:18.736953 master-0 kubenswrapper[8244]: I0318 10:10:18.736891 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8e27b7d086edf5d2cf47b703574641d8/kube-scheduler/0.log"
Mar 18 10:10:18.738345 master-0 kubenswrapper[8244]: I0318 10:10:18.738295 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 18 10:10:18.761975 master-0 kubenswrapper[8244]: I0318 10:10:18.759759 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/11a2f93448b9d54da9854663936e2b73-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"11a2f93448b9d54da9854663936e2b73\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 18 10:10:18.761975 master-0 kubenswrapper[8244]: I0318 10:10:18.759912 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/11a2f93448b9d54da9854663936e2b73-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"11a2f93448b9d54da9854663936e2b73\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 18 10:10:18.761975 master-0 kubenswrapper[8244]: I0318 10:10:18.760060 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/11a2f93448b9d54da9854663936e2b73-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"11a2f93448b9d54da9854663936e2b73\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 18 10:10:18.761975 master-0 kubenswrapper[8244]: I0318 10:10:18.760201 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/11a2f93448b9d54da9854663936e2b73-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"11a2f93448b9d54da9854663936e2b73\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 18 10:10:18.779645 master-0 kubenswrapper[8244]: I0318 10:10:18.779589 8244 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="8e27b7d086edf5d2cf47b703574641d8" podUID="11a2f93448b9d54da9854663936e2b73"
Mar 18 10:10:18.784660 master-0 kubenswrapper[8244]: I0318 10:10:18.784557 8244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=4.784530854 podStartE2EDuration="4.784530854s" podCreationTimestamp="2026-03-18 10:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:10:18.773273794 +0000 UTC m=+935.253009922" watchObservedRunningTime="2026-03-18 10:10:18.784530854 +0000 UTC m=+935.264267002"
Mar 18 10:10:18.860734 master-0 kubenswrapper[8244]: I0318 10:10:18.860657 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-resource-dir\") pod \"8e27b7d086edf5d2cf47b703574641d8\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") "
Mar 18 10:10:18.861170 master-0 kubenswrapper[8244]: I0318 10:10:18.860757 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "8e27b7d086edf5d2cf47b703574641d8" (UID: "8e27b7d086edf5d2cf47b703574641d8"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 10:10:18.861170 master-0 kubenswrapper[8244]: I0318 10:10:18.860793 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-cert-dir\") pod \"8e27b7d086edf5d2cf47b703574641d8\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") "
Mar 18 10:10:18.861170 master-0 kubenswrapper[8244]: I0318 10:10:18.860853 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "8e27b7d086edf5d2cf47b703574641d8" (UID: "8e27b7d086edf5d2cf47b703574641d8"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 10:10:18.861170 master-0 kubenswrapper[8244]: I0318 10:10:18.861138 8244 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-cert-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 10:10:18.861170 master-0 kubenswrapper[8244]: I0318 10:10:18.861159 8244 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 10:10:19.386549 master-0 kubenswrapper[8244]: I0318 10:10:19.386469 8244 generic.go:334] "Generic (PLEG): container finished" podID="fcf01f63-ed66-4f0d-b2df-97c77bbf8543" containerID="cd5460a46f1af5014f09f3d74c852c3c8e1dbae9dbdc5909c502350cb309005a" exitCode=0
Mar 18 10:10:19.386911 master-0 kubenswrapper[8244]: I0318 10:10:19.386535 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-retry-1-master-0" event={"ID":"fcf01f63-ed66-4f0d-b2df-97c77bbf8543","Type":"ContainerDied","Data":"cd5460a46f1af5014f09f3d74c852c3c8e1dbae9dbdc5909c502350cb309005a"}
Mar 18 10:10:19.392807 master-0 kubenswrapper[8244]: I0318 10:10:19.392743 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8e27b7d086edf5d2cf47b703574641d8/kube-scheduler-cert-syncer/0.log"
Mar 18 10:10:19.394732 master-0 kubenswrapper[8244]: I0318 10:10:19.394684 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8e27b7d086edf5d2cf47b703574641d8/kube-scheduler/0.log"
Mar 18 10:10:19.395436 master-0 kubenswrapper[8244]: I0318 10:10:19.395397 8244 generic.go:334] "Generic (PLEG): container finished" podID="8e27b7d086edf5d2cf47b703574641d8" containerID="0f4bf1dfc4a190fd3410aa065645689966e325eb73cf7788b53ae0a9bf57f3cc" exitCode=0
Mar 18 10:10:19.395436 master-0 kubenswrapper[8244]: I0318 10:10:19.395433 8244 generic.go:334] "Generic (PLEG): container finished" podID="8e27b7d086edf5d2cf47b703574641d8" containerID="504c7c58af279fedab2f56000cc691abf8096faa6bf0c02f961583e20a138ed6" exitCode=0
Mar 18 10:10:19.395605 master-0 kubenswrapper[8244]: I0318 10:10:19.395453 8244 generic.go:334] "Generic (PLEG): container finished" podID="8e27b7d086edf5d2cf47b703574641d8" containerID="e73e9ab6250891a74742cf894dfa6d6f12c07f81c7c6e29abf71445a93b042c6" exitCode=2
Mar 18 10:10:19.395605 master-0 kubenswrapper[8244]: I0318 10:10:19.395495 8244 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="175a7f574cdd0bb033854cd54eafd3c786bd342ffc7ec8cd013b6215f3ca1994"
Mar 18 10:10:19.395605 master-0 kubenswrapper[8244]: I0318 10:10:19.395518 8244 scope.go:117] "RemoveContainer" containerID="c508677fa84c67b31ad63db19f2ce6332119259b51c9ae7aa95d7b13079c3837"
Mar 18 10:10:19.395605 master-0 kubenswrapper[8244]: I0318 10:10:19.395548 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 18 10:10:19.406948 master-0 kubenswrapper[8244]: I0318 10:10:19.406079 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:10:19.406948 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:10:19.406948 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:10:19.406948 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:10:19.409895 master-0 kubenswrapper[8244]: I0318 10:10:19.409773 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:10:19.414875 master-0 kubenswrapper[8244]: I0318 10:10:19.414757 8244 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="8e27b7d086edf5d2cf47b703574641d8" podUID="11a2f93448b9d54da9854663936e2b73"
Mar 18 10:10:19.434375 master-0 kubenswrapper[8244]: I0318 10:10:19.434315 8244 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="8e27b7d086edf5d2cf47b703574641d8" podUID="11a2f93448b9d54da9854663936e2b73"
Mar 18 10:10:19.751111 master-0 kubenswrapper[8244]: I0318 10:10:19.750963 8244 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e27b7d086edf5d2cf47b703574641d8" path="/var/lib/kubelet/pods/8e27b7d086edf5d2cf47b703574641d8/volumes"
Mar 18 10:10:20.222467 master-0 kubenswrapper[8244]: I0318 10:10:20.222397 8244 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 18 10:10:20.223861 master-0 kubenswrapper[8244]: I0318 10:10:20.223720 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="af8e875368eec13e995ea08015e08c42" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://8a062b1b85a12fd918c3c62a85847e5a60612517f0ee750aabe64bd125668daf" gracePeriod=30
Mar 18 10:10:20.224218 master-0 kubenswrapper[8244]: I0318 10:10:20.223792 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="af8e875368eec13e995ea08015e08c42" containerName="kube-controller-manager" containerID="cri-o://fce78d10ab44ad6e3870abc2e19feeb6f5ae7acb96a08b13653663840e0cbb1b" gracePeriod=30
Mar 18 10:10:20.224351 master-0 kubenswrapper[8244]: I0318 10:10:20.223904 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="af8e875368eec13e995ea08015e08c42" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://eeb871e8e559b9fd82b985e8a38853c6cc1a0962899e9d61d0017f002e610d41" gracePeriod=30
Mar 18 10:10:20.224473 master-0 kubenswrapper[8244]: I0318 10:10:20.223747 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="af8e875368eec13e995ea08015e08c42" containerName="cluster-policy-controller" containerID="cri-o://7ca73c96270bb01e4b2a501f5fca8a82d6d3109e114172103ea987822829d77c" gracePeriod=30
Mar 18 10:10:20.224747 master-0 kubenswrapper[8244]: I0318 10:10:20.224672 8244 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 18 10:10:20.225938 master-0 kubenswrapper[8244]: E0318 10:10:20.225113 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af8e875368eec13e995ea08015e08c42" containerName="cluster-policy-controller"
Mar 18 10:10:20.225938 master-0 kubenswrapper[8244]: I0318 10:10:20.225141 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="af8e875368eec13e995ea08015e08c42" containerName="cluster-policy-controller"
Mar 18 10:10:20.225938 master-0 kubenswrapper[8244]: E0318 10:10:20.225175 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af8e875368eec13e995ea08015e08c42" containerName="kube-controller-manager"
Mar 18 10:10:20.225938 master-0 kubenswrapper[8244]: I0318 10:10:20.225188 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="af8e875368eec13e995ea08015e08c42" containerName="kube-controller-manager"
Mar 18 10:10:20.225938 master-0 kubenswrapper[8244]: E0318 10:10:20.225213 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af8e875368eec13e995ea08015e08c42" containerName="kube-controller-manager"
Mar 18 10:10:20.225938 master-0 kubenswrapper[8244]: I0318 10:10:20.225227 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="af8e875368eec13e995ea08015e08c42" containerName="kube-controller-manager"
Mar 18 10:10:20.225938 master-0 kubenswrapper[8244]: E0318 10:10:20.225242 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af8e875368eec13e995ea08015e08c42" containerName="cluster-policy-controller"
Mar 18 10:10:20.225938 master-0 kubenswrapper[8244]: I0318 10:10:20.225254 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="af8e875368eec13e995ea08015e08c42" containerName="cluster-policy-controller"
Mar 18 10:10:20.225938 master-0 kubenswrapper[8244]: E0318 10:10:20.225279 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af8e875368eec13e995ea08015e08c42" containerName="cluster-policy-controller"
Mar 18 10:10:20.225938 master-0 kubenswrapper[8244]: I0318 10:10:20.225290 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="af8e875368eec13e995ea08015e08c42" containerName="cluster-policy-controller"
Mar 18 10:10:20.225938 master-0 kubenswrapper[8244]: E0318 10:10:20.225312 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af8e875368eec13e995ea08015e08c42" containerName="cluster-policy-controller"
Mar 18 10:10:20.225938 master-0 kubenswrapper[8244]: I0318 10:10:20.225324 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="af8e875368eec13e995ea08015e08c42" containerName="cluster-policy-controller"
Mar 18 10:10:20.225938 master-0 kubenswrapper[8244]: E0318 10:10:20.225350 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af8e875368eec13e995ea08015e08c42" containerName="kube-controller-manager-cert-syncer"
Mar 18 10:10:20.225938 master-0 kubenswrapper[8244]: I0318 10:10:20.225364 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="af8e875368eec13e995ea08015e08c42" containerName="kube-controller-manager-cert-syncer"
Mar 18 10:10:20.225938 master-0 kubenswrapper[8244]: E0318 10:10:20.225379 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af8e875368eec13e995ea08015e08c42" containerName="kube-controller-manager-recovery-controller"
Mar 18 10:10:20.225938 master-0 kubenswrapper[8244]: I0318 10:10:20.225391 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="af8e875368eec13e995ea08015e08c42" containerName="kube-controller-manager-recovery-controller"
Mar 18 10:10:20.225938 master-0 kubenswrapper[8244]: I0318 10:10:20.225585 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="af8e875368eec13e995ea08015e08c42" containerName="cluster-policy-controller"
Mar 18 10:10:20.225938 master-0 kubenswrapper[8244]: I0318 10:10:20.225625 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="af8e875368eec13e995ea08015e08c42" containerName="kube-controller-manager-cert-syncer"
Mar 18 10:10:20.225938 master-0 kubenswrapper[8244]: I0318 10:10:20.225650 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="af8e875368eec13e995ea08015e08c42" containerName="cluster-policy-controller"
Mar 18 10:10:20.225938 master-0 kubenswrapper[8244]: I0318 10:10:20.225670 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="af8e875368eec13e995ea08015e08c42" containerName="kube-controller-manager"
Mar 18 10:10:20.225938 master-0 kubenswrapper[8244]: I0318 10:10:20.225693 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="af8e875368eec13e995ea08015e08c42" containerName="kube-controller-manager-recovery-controller"
Mar 18 10:10:20.225938 master-0 kubenswrapper[8244]: I0318 10:10:20.225720 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="af8e875368eec13e995ea08015e08c42" containerName="kube-controller-manager"
Mar 18 10:10:20.225938 master-0 kubenswrapper[8244]: I0318 10:10:20.225756 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="af8e875368eec13e995ea08015e08c42" containerName="cluster-policy-controller"
Mar 18 10:10:20.227712 master-0 kubenswrapper[8244]: E0318 10:10:20.225996 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af8e875368eec13e995ea08015e08c42" containerName="cluster-policy-controller"
Mar 18 10:10:20.227712 master-0 kubenswrapper[8244]: I0318 10:10:20.226012 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="af8e875368eec13e995ea08015e08c42" containerName="cluster-policy-controller"
Mar 18 10:10:20.227712 master-0 kubenswrapper[8244]: I0318 10:10:20.226234 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="af8e875368eec13e995ea08015e08c42" containerName="cluster-policy-controller"
Mar 18 10:10:20.227712 master-0 kubenswrapper[8244]: I0318 10:10:20.226278 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="af8e875368eec13e995ea08015e08c42" containerName="cluster-policy-controller"
Mar 18 10:10:20.284344 master-0 kubenswrapper[8244]: I0318 10:10:20.284275 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3ddfa5bb627414042dcc2d2204092c5a-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"3ddfa5bb627414042dcc2d2204092c5a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 10:10:20.284778 master-0 kubenswrapper[8244]: I0318 10:10:20.284574 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3ddfa5bb627414042dcc2d2204092c5a-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"3ddfa5bb627414042dcc2d2204092c5a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 10:10:20.386223 master-0 kubenswrapper[8244]: I0318 10:10:20.386168 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3ddfa5bb627414042dcc2d2204092c5a-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"3ddfa5bb627414042dcc2d2204092c5a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 10:10:20.386376 master-0 kubenswrapper[8244]: I0318 10:10:20.386303 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3ddfa5bb627414042dcc2d2204092c5a-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"3ddfa5bb627414042dcc2d2204092c5a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 10:10:20.386509 master-0 kubenswrapper[8244]: I0318 10:10:20.386399 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3ddfa5bb627414042dcc2d2204092c5a-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"3ddfa5bb627414042dcc2d2204092c5a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 10:10:20.386603 master-0 kubenswrapper[8244]: I0318 10:10:20.386453 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3ddfa5bb627414042dcc2d2204092c5a-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"3ddfa5bb627414042dcc2d2204092c5a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 10:10:20.402496 master-0 kubenswrapper[8244]: I0318 10:10:20.402420 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:10:20.402496 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld
Mar 18 10:10:20.402496 master-0 kubenswrapper[8244]: [+]process-running ok
Mar 18 10:10:20.402496 master-0 kubenswrapper[8244]: healthz check failed
Mar 18 10:10:20.402939 master-0 kubenswrapper[8244]: I0318 10:10:20.402530 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:10:20.412486 master-0 kubenswrapper[8244]: I0318 10:10:20.412439 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8e27b7d086edf5d2cf47b703574641d8/kube-scheduler-cert-syncer/0.log"
Mar 18 10:10:20.418128 master-0 kubenswrapper[8244]: I0318 10:10:20.418062 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_af8e875368eec13e995ea08015e08c42/cluster-policy-controller/3.log"
Mar 18 10:10:20.420468 master-0 kubenswrapper[8244]: I0318 10:10:20.420412 8244 log.go:25]
"Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_af8e875368eec13e995ea08015e08c42/cluster-policy-controller/3.log" Mar 18 10:10:20.420958 master-0 kubenswrapper[8244]: I0318 10:10:20.420914 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_af8e875368eec13e995ea08015e08c42/kube-controller-manager-cert-syncer/0.log" Mar 18 10:10:20.421668 master-0 kubenswrapper[8244]: I0318 10:10:20.421635 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_af8e875368eec13e995ea08015e08c42/kube-controller-manager/0.log" Mar 18 10:10:20.421773 master-0 kubenswrapper[8244]: I0318 10:10:20.421683 8244 generic.go:334] "Generic (PLEG): container finished" podID="af8e875368eec13e995ea08015e08c42" containerID="7ca73c96270bb01e4b2a501f5fca8a82d6d3109e114172103ea987822829d77c" exitCode=0 Mar 18 10:10:20.421773 master-0 kubenswrapper[8244]: I0318 10:10:20.421710 8244 generic.go:334] "Generic (PLEG): container finished" podID="af8e875368eec13e995ea08015e08c42" containerID="fce78d10ab44ad6e3870abc2e19feeb6f5ae7acb96a08b13653663840e0cbb1b" exitCode=0 Mar 18 10:10:20.421773 master-0 kubenswrapper[8244]: I0318 10:10:20.421722 8244 generic.go:334] "Generic (PLEG): container finished" podID="af8e875368eec13e995ea08015e08c42" containerID="eeb871e8e559b9fd82b985e8a38853c6cc1a0962899e9d61d0017f002e610d41" exitCode=0 Mar 18 10:10:20.421773 master-0 kubenswrapper[8244]: I0318 10:10:20.421732 8244 generic.go:334] "Generic (PLEG): container finished" podID="af8e875368eec13e995ea08015e08c42" containerID="8a062b1b85a12fd918c3c62a85847e5a60612517f0ee750aabe64bd125668daf" exitCode=2 Mar 18 10:10:20.421773 master-0 kubenswrapper[8244]: I0318 10:10:20.421757 8244 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="b4eb3bb67999d4fed39987c312beb2bc06f47fac3b7fcdfdc48994c77752b8ad" Mar 18 10:10:20.422183 master-0 kubenswrapper[8244]: I0318 10:10:20.421805 8244 scope.go:117] "RemoveContainer" containerID="f1f4785a8e07522509ea4dcc453b0c2d3e2548d2f8a5d0fc72eb3f727a1c3d90" Mar 18 10:10:20.423105 master-0 kubenswrapper[8244]: I0318 10:10:20.423025 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_af8e875368eec13e995ea08015e08c42/kube-controller-manager-cert-syncer/0.log" Mar 18 10:10:20.424201 master-0 kubenswrapper[8244]: I0318 10:10:20.424079 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_af8e875368eec13e995ea08015e08c42/kube-controller-manager/0.log" Mar 18 10:10:20.424201 master-0 kubenswrapper[8244]: I0318 10:10:20.424161 8244 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:10:20.429137 master-0 kubenswrapper[8244]: I0318 10:10:20.429017 8244 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="af8e875368eec13e995ea08015e08c42" podUID="3ddfa5bb627414042dcc2d2204092c5a" Mar 18 10:10:20.454267 master-0 kubenswrapper[8244]: I0318 10:10:20.454219 8244 scope.go:117] "RemoveContainer" containerID="b5440fd92f867438da48c59f39988e512f02a0b7141abc1139ed7de105e95766" Mar 18 10:10:20.589018 master-0 kubenswrapper[8244]: I0318 10:10:20.588911 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/af8e875368eec13e995ea08015e08c42-cert-dir\") pod \"af8e875368eec13e995ea08015e08c42\" (UID: \"af8e875368eec13e995ea08015e08c42\") " Mar 18 10:10:20.589018 master-0 kubenswrapper[8244]: I0318 10:10:20.589020 8244 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/af8e875368eec13e995ea08015e08c42-resource-dir\") pod \"af8e875368eec13e995ea08015e08c42\" (UID: \"af8e875368eec13e995ea08015e08c42\") " Mar 18 10:10:20.589454 master-0 kubenswrapper[8244]: I0318 10:10:20.589080 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af8e875368eec13e995ea08015e08c42-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "af8e875368eec13e995ea08015e08c42" (UID: "af8e875368eec13e995ea08015e08c42"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:10:20.589454 master-0 kubenswrapper[8244]: I0318 10:10:20.589189 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af8e875368eec13e995ea08015e08c42-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "af8e875368eec13e995ea08015e08c42" (UID: "af8e875368eec13e995ea08015e08c42"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:10:20.589752 master-0 kubenswrapper[8244]: I0318 10:10:20.589695 8244 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/af8e875368eec13e995ea08015e08c42-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 10:10:20.589882 master-0 kubenswrapper[8244]: I0318 10:10:20.589753 8244 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/af8e875368eec13e995ea08015e08c42-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 10:10:20.758618 master-0 kubenswrapper[8244]: I0318 10:10:20.758568 8244 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-6-retry-1-master-0" Mar 18 10:10:20.793102 master-0 kubenswrapper[8244]: I0318 10:10:20.793049 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fcf01f63-ed66-4f0d-b2df-97c77bbf8543-var-lock\") pod \"fcf01f63-ed66-4f0d-b2df-97c77bbf8543\" (UID: \"fcf01f63-ed66-4f0d-b2df-97c77bbf8543\") " Mar 18 10:10:20.793102 master-0 kubenswrapper[8244]: I0318 10:10:20.793103 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fcf01f63-ed66-4f0d-b2df-97c77bbf8543-kubelet-dir\") pod \"fcf01f63-ed66-4f0d-b2df-97c77bbf8543\" (UID: \"fcf01f63-ed66-4f0d-b2df-97c77bbf8543\") " Mar 18 10:10:20.793443 master-0 kubenswrapper[8244]: I0318 10:10:20.793140 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fcf01f63-ed66-4f0d-b2df-97c77bbf8543-kube-api-access\") pod \"fcf01f63-ed66-4f0d-b2df-97c77bbf8543\" (UID: \"fcf01f63-ed66-4f0d-b2df-97c77bbf8543\") " Mar 18 10:10:20.793443 master-0 kubenswrapper[8244]: I0318 10:10:20.793206 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fcf01f63-ed66-4f0d-b2df-97c77bbf8543-var-lock" (OuterVolumeSpecName: "var-lock") pod "fcf01f63-ed66-4f0d-b2df-97c77bbf8543" (UID: "fcf01f63-ed66-4f0d-b2df-97c77bbf8543"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:10:20.793443 master-0 kubenswrapper[8244]: I0318 10:10:20.793249 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fcf01f63-ed66-4f0d-b2df-97c77bbf8543-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fcf01f63-ed66-4f0d-b2df-97c77bbf8543" (UID: "fcf01f63-ed66-4f0d-b2df-97c77bbf8543"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:10:20.793619 master-0 kubenswrapper[8244]: I0318 10:10:20.793492 8244 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fcf01f63-ed66-4f0d-b2df-97c77bbf8543-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 10:10:20.793619 master-0 kubenswrapper[8244]: I0318 10:10:20.793516 8244 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fcf01f63-ed66-4f0d-b2df-97c77bbf8543-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 10:10:20.796307 master-0 kubenswrapper[8244]: I0318 10:10:20.796258 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcf01f63-ed66-4f0d-b2df-97c77bbf8543-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fcf01f63-ed66-4f0d-b2df-97c77bbf8543" (UID: "fcf01f63-ed66-4f0d-b2df-97c77bbf8543"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 10:10:20.894163 master-0 kubenswrapper[8244]: I0318 10:10:20.894099 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fcf01f63-ed66-4f0d-b2df-97c77bbf8543-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 10:10:21.401423 master-0 kubenswrapper[8244]: I0318 10:10:21.401363 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:10:21.401423 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:10:21.401423 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:10:21.401423 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:10:21.401739 master-0 kubenswrapper[8244]: I0318 10:10:21.401435 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:10:21.429402 master-0 kubenswrapper[8244]: I0318 10:10:21.428979 8244 generic.go:334] "Generic (PLEG): container finished" podID="a6716938-ca14-4000-b7f1-b60e93e93c0d" containerID="07f18c8da1828af97eeefd0d942acb995fabaae660b2da8d651807992de76bb4" exitCode=0 Mar 18 10:10:21.429402 master-0 kubenswrapper[8244]: I0318 10:10:21.429039 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" event={"ID":"a6716938-ca14-4000-b7f1-b60e93e93c0d","Type":"ContainerDied","Data":"07f18c8da1828af97eeefd0d942acb995fabaae660b2da8d651807992de76bb4"} Mar 18 10:10:21.434888 master-0 kubenswrapper[8244]: I0318 10:10:21.431406 8244 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-6-retry-1-master-0" Mar 18 10:10:21.434888 master-0 kubenswrapper[8244]: I0318 10:10:21.431430 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-retry-1-master-0" event={"ID":"fcf01f63-ed66-4f0d-b2df-97c77bbf8543","Type":"ContainerDied","Data":"918ed1f73d1c1442c0a8e7726a8b614353a7b30844e6305ebc1a1ba857285248"} Mar 18 10:10:21.434888 master-0 kubenswrapper[8244]: I0318 10:10:21.431478 8244 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="918ed1f73d1c1442c0a8e7726a8b614353a7b30844e6305ebc1a1ba857285248" Mar 18 10:10:21.439303 master-0 kubenswrapper[8244]: I0318 10:10:21.439243 8244 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_af8e875368eec13e995ea08015e08c42/kube-controller-manager-cert-syncer/0.log" Mar 18 10:10:21.439499 master-0 kubenswrapper[8244]: I0318 10:10:21.439452 8244 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:10:21.462347 master-0 kubenswrapper[8244]: I0318 10:10:21.462300 8244 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="af8e875368eec13e995ea08015e08c42" podUID="3ddfa5bb627414042dcc2d2204092c5a" Mar 18 10:10:21.474923 master-0 kubenswrapper[8244]: I0318 10:10:21.474881 8244 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="af8e875368eec13e995ea08015e08c42" podUID="3ddfa5bb627414042dcc2d2204092c5a" Mar 18 10:10:21.746165 master-0 kubenswrapper[8244]: I0318 10:10:21.746076 8244 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af8e875368eec13e995ea08015e08c42" path="/var/lib/kubelet/pods/af8e875368eec13e995ea08015e08c42/volumes" Mar 18 10:10:22.401939 master-0 kubenswrapper[8244]: I0318 10:10:22.401867 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:10:22.401939 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:10:22.401939 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:10:22.401939 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:10:22.401939 master-0 kubenswrapper[8244]: I0318 10:10:22.401931 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:10:22.871670 master-0 kubenswrapper[8244]: I0318 10:10:22.871612 8244 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" Mar 18 10:10:23.024578 master-0 kubenswrapper[8244]: I0318 10:10:23.024504 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6716938-ca14-4000-b7f1-b60e93e93c0d-kube-api-access\") pod \"a6716938-ca14-4000-b7f1-b60e93e93c0d\" (UID: \"a6716938-ca14-4000-b7f1-b60e93e93c0d\") " Mar 18 10:10:23.024770 master-0 kubenswrapper[8244]: I0318 10:10:23.024621 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a6716938-ca14-4000-b7f1-b60e93e93c0d-var-lock\") pod \"a6716938-ca14-4000-b7f1-b60e93e93c0d\" (UID: \"a6716938-ca14-4000-b7f1-b60e93e93c0d\") " Mar 18 10:10:23.024770 master-0 kubenswrapper[8244]: I0318 10:10:23.024672 8244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a6716938-ca14-4000-b7f1-b60e93e93c0d-kubelet-dir\") pod \"a6716938-ca14-4000-b7f1-b60e93e93c0d\" (UID: \"a6716938-ca14-4000-b7f1-b60e93e93c0d\") " Mar 18 10:10:23.024922 master-0 kubenswrapper[8244]: I0318 10:10:23.024861 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6716938-ca14-4000-b7f1-b60e93e93c0d-var-lock" (OuterVolumeSpecName: "var-lock") pod "a6716938-ca14-4000-b7f1-b60e93e93c0d" (UID: "a6716938-ca14-4000-b7f1-b60e93e93c0d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:10:23.024986 master-0 kubenswrapper[8244]: I0318 10:10:23.024926 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6716938-ca14-4000-b7f1-b60e93e93c0d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a6716938-ca14-4000-b7f1-b60e93e93c0d" (UID: "a6716938-ca14-4000-b7f1-b60e93e93c0d"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:10:23.025350 master-0 kubenswrapper[8244]: I0318 10:10:23.025264 8244 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a6716938-ca14-4000-b7f1-b60e93e93c0d-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 10:10:23.025350 master-0 kubenswrapper[8244]: I0318 10:10:23.025327 8244 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a6716938-ca14-4000-b7f1-b60e93e93c0d-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 10:10:23.029327 master-0 kubenswrapper[8244]: I0318 10:10:23.029276 8244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6716938-ca14-4000-b7f1-b60e93e93c0d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a6716938-ca14-4000-b7f1-b60e93e93c0d" (UID: "a6716938-ca14-4000-b7f1-b60e93e93c0d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 10:10:23.126473 master-0 kubenswrapper[8244]: I0318 10:10:23.126276 8244 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6716938-ca14-4000-b7f1-b60e93e93c0d-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 10:10:23.403808 master-0 kubenswrapper[8244]: I0318 10:10:23.403595 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:10:23.403808 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:10:23.403808 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:10:23.403808 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:10:23.403808 master-0 kubenswrapper[8244]: I0318 10:10:23.403756 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:10:23.459725 master-0 kubenswrapper[8244]: I0318 10:10:23.459637 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" event={"ID":"a6716938-ca14-4000-b7f1-b60e93e93c0d","Type":"ContainerDied","Data":"027cb739429dc761a3f2ade604437810a5898c43151b24416d6963442db7ad65"} Mar 18 10:10:23.460104 master-0 kubenswrapper[8244]: I0318 10:10:23.459735 8244 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="027cb739429dc761a3f2ade604437810a5898c43151b24416d6963442db7ad65" Mar 18 10:10:23.460104 master-0 kubenswrapper[8244]: I0318 10:10:23.459747 8244 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" Mar 18 10:10:24.402549 master-0 kubenswrapper[8244]: I0318 10:10:24.402120 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:10:24.402549 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:10:24.402549 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:10:24.402549 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:10:24.402549 master-0 kubenswrapper[8244]: I0318 10:10:24.402247 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:10:25.403462 master-0 kubenswrapper[8244]: I0318 10:10:25.403352 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:10:25.403462 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:10:25.403462 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:10:25.403462 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:10:25.404588 master-0 kubenswrapper[8244]: I0318 10:10:25.403492 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:10:26.403610 master-0 kubenswrapper[8244]: I0318 10:10:26.403497 8244 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:10:26.403610 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:10:26.403610 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:10:26.403610 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:10:26.403610 master-0 kubenswrapper[8244]: I0318 10:10:26.403603 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:10:27.402141 master-0 kubenswrapper[8244]: I0318 10:10:27.402050 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:10:27.402141 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:10:27.402141 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:10:27.402141 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:10:27.405267 master-0 kubenswrapper[8244]: I0318 10:10:27.402147 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:10:28.402458 master-0 kubenswrapper[8244]: I0318 10:10:28.402384 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 
10:10:28.402458 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:10:28.402458 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:10:28.402458 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:10:28.402458 master-0 kubenswrapper[8244]: I0318 10:10:28.402453 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:10:29.402642 master-0 kubenswrapper[8244]: I0318 10:10:29.402571 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:10:29.402642 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:10:29.402642 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:10:29.402642 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:10:29.402642 master-0 kubenswrapper[8244]: I0318 10:10:29.402642 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:10:30.402869 master-0 kubenswrapper[8244]: I0318 10:10:30.402727 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:10:30.402869 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:10:30.402869 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:10:30.402869 master-0 kubenswrapper[8244]: healthz 
check failed Mar 18 10:10:30.403894 master-0 kubenswrapper[8244]: I0318 10:10:30.402888 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:10:31.402277 master-0 kubenswrapper[8244]: I0318 10:10:31.402196 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:10:31.402277 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:10:31.402277 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:10:31.402277 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:10:31.402641 master-0 kubenswrapper[8244]: I0318 10:10:31.402286 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:10:31.732744 master-0 kubenswrapper[8244]: I0318 10:10:31.732597 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:10:31.766369 master-0 kubenswrapper[8244]: I0318 10:10:31.766263 8244 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="4157fab0-c235-4919-a3e0-586b42d46aae" Mar 18 10:10:31.766369 master-0 kubenswrapper[8244]: I0318 10:10:31.766364 8244 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="4157fab0-c235-4919-a3e0-586b42d46aae" Mar 18 10:10:31.784465 master-0 kubenswrapper[8244]: I0318 10:10:31.784397 8244 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:10:31.788993 master-0 kubenswrapper[8244]: I0318 10:10:31.788923 8244 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 10:10:31.797298 master-0 kubenswrapper[8244]: I0318 10:10:31.797210 8244 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 10:10:31.804327 master-0 kubenswrapper[8244]: I0318 10:10:31.804255 8244 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 10:10:31.805974 master-0 kubenswrapper[8244]: I0318 10:10:31.805936 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:10:32.401455 master-0 kubenswrapper[8244]: I0318 10:10:32.401377 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:10:32.401455 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:10:32.401455 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:10:32.401455 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:10:32.401720 master-0 kubenswrapper[8244]: I0318 10:10:32.401485 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:10:32.544153 master-0 kubenswrapper[8244]: I0318 10:10:32.544072 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3ddfa5bb627414042dcc2d2204092c5a","Type":"ContainerStarted","Data":"1bf60bf33cfc23f5f7f85a209ad5473b9719031ea092727003128b81f69dfc9e"} Mar 18 10:10:32.544421 master-0 kubenswrapper[8244]: I0318 10:10:32.544161 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3ddfa5bb627414042dcc2d2204092c5a","Type":"ContainerStarted","Data":"fdfbe791c7dc81669c0055767b2119c9a2cf184b178248ae50fb983ef7ccd9a8"} Mar 18 10:10:32.544421 master-0 kubenswrapper[8244]: I0318 10:10:32.544178 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"3ddfa5bb627414042dcc2d2204092c5a","Type":"ContainerStarted","Data":"fd600b9af2d2390bce62bac606740fc4a23373db916a45bc5361be1ed164fee1"} Mar 18 10:10:33.405275 master-0 kubenswrapper[8244]: I0318 10:10:33.405151 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:10:33.405275 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:10:33.405275 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:10:33.405275 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:10:33.405275 master-0 kubenswrapper[8244]: I0318 10:10:33.405254 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:10:33.409051 master-0 kubenswrapper[8244]: I0318 10:10:33.408995 8244 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 18 10:10:33.409274 master-0 kubenswrapper[8244]: I0318 10:10:33.409238 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" containerID="cri-o://5a898e220fc5eed6a4a32559913535749eb16cc2a7cd17e978e4c62aa7e6452a" gracePeriod=15 Mar 18 10:10:33.409404 master-0 kubenswrapper[8244]: I0318 10:10:33.409372 8244 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" 
containerID="cri-o://f94e501b0ad12236c03bc538f983952a18a8058deb0777210379742bce193fde" gracePeriod=15 Mar 18 10:10:33.411456 master-0 kubenswrapper[8244]: I0318 10:10:33.411417 8244 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 18 10:10:33.411679 master-0 kubenswrapper[8244]: E0318 10:10:33.411655 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" Mar 18 10:10:33.411679 master-0 kubenswrapper[8244]: I0318 10:10:33.411672 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" Mar 18 10:10:33.411890 master-0 kubenswrapper[8244]: E0318 10:10:33.411689 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcf01f63-ed66-4f0d-b2df-97c77bbf8543" containerName="installer" Mar 18 10:10:33.411890 master-0 kubenswrapper[8244]: I0318 10:10:33.411697 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcf01f63-ed66-4f0d-b2df-97c77bbf8543" containerName="installer" Mar 18 10:10:33.411890 master-0 kubenswrapper[8244]: E0318 10:10:33.411714 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup" Mar 18 10:10:33.411890 master-0 kubenswrapper[8244]: I0318 10:10:33.411719 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup" Mar 18 10:10:33.411890 master-0 kubenswrapper[8244]: E0318 10:10:33.411735 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6716938-ca14-4000-b7f1-b60e93e93c0d" containerName="installer" Mar 18 10:10:33.411890 master-0 kubenswrapper[8244]: I0318 10:10:33.411742 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6716938-ca14-4000-b7f1-b60e93e93c0d" containerName="installer" Mar 18 10:10:33.411890 master-0 kubenswrapper[8244]: E0318 
10:10:33.411755 8244 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" Mar 18 10:10:33.411890 master-0 kubenswrapper[8244]: I0318 10:10:33.411763 8244 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" Mar 18 10:10:33.414726 master-0 kubenswrapper[8244]: I0318 10:10:33.411937 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcf01f63-ed66-4f0d-b2df-97c77bbf8543" containerName="installer" Mar 18 10:10:33.414726 master-0 kubenswrapper[8244]: I0318 10:10:33.411956 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" Mar 18 10:10:33.414726 master-0 kubenswrapper[8244]: I0318 10:10:33.411986 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" Mar 18 10:10:33.414726 master-0 kubenswrapper[8244]: I0318 10:10:33.411996 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6716938-ca14-4000-b7f1-b60e93e93c0d" containerName="installer" Mar 18 10:10:33.414726 master-0 kubenswrapper[8244]: I0318 10:10:33.412003 8244 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup" Mar 18 10:10:33.419483 master-0 kubenswrapper[8244]: I0318 10:10:33.419433 8244 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 10:10:33.419911 master-0 kubenswrapper[8244]: I0318 10:10:33.419812 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:10:33.421887 master-0 kubenswrapper[8244]: I0318 10:10:33.421792 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:10:33.502106 master-0 kubenswrapper[8244]: I0318 10:10:33.501575 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:10:33.502106 master-0 kubenswrapper[8244]: I0318 10:10:33.501627 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:10:33.502106 master-0 kubenswrapper[8244]: I0318 10:10:33.501651 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:10:33.502106 master-0 kubenswrapper[8244]: I0318 10:10:33.501753 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:10:33.502106 master-0 kubenswrapper[8244]: I0318 10:10:33.501788 8244 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:10:33.502106 master-0 kubenswrapper[8244]: I0318 10:10:33.501806 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:10:33.502106 master-0 kubenswrapper[8244]: I0318 10:10:33.501903 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:10:33.502106 master-0 kubenswrapper[8244]: I0318 10:10:33.501923 8244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:10:33.509075 master-0 kubenswrapper[8244]: E0318 10:10:33.509009 8244 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:10:33.528952 master-0 kubenswrapper[8244]: E0318 10:10:33.528829 8244 
kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:10:33.561109 master-0 kubenswrapper[8244]: I0318 10:10:33.561041 8244 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="f94e501b0ad12236c03bc538f983952a18a8058deb0777210379742bce193fde" exitCode=0 Mar 18 10:10:33.563916 master-0 kubenswrapper[8244]: I0318 10:10:33.563884 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3ddfa5bb627414042dcc2d2204092c5a","Type":"ContainerStarted","Data":"86d62eb7c2cf9bbf042a562045a7b7ee2ca62e629ba61cfe06638c2f84729c97"} Mar 18 10:10:33.564021 master-0 kubenswrapper[8244]: I0318 10:10:33.563921 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3ddfa5bb627414042dcc2d2204092c5a","Type":"ContainerStarted","Data":"a6b636ab88d31a6f628201f90c1cf0b74da0bfac5e5f726055bbd0d041527530"} Mar 18 10:10:33.565594 master-0 kubenswrapper[8244]: I0318 10:10:33.565527 8244 status_manager.go:851] "Failed to get status for pod" podUID="3ddfa5bb627414042dcc2d2204092c5a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 10:10:33.591413 master-0 kubenswrapper[8244]: E0318 10:10:33.591340 8244 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49fac1b46a11e49501805e891baae4a9.slice/crio-f94e501b0ad12236c03bc538f983952a18a8058deb0777210379742bce193fde.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49fac1b46a11e49501805e891baae4a9.slice/crio-conmon-f94e501b0ad12236c03bc538f983952a18a8058deb0777210379742bce193fde.scope\": RecentStats: unable to find data in memory cache]" Mar 18 10:10:33.602965 master-0 kubenswrapper[8244]: I0318 10:10:33.602908 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:10:33.603069 master-0 kubenswrapper[8244]: I0318 10:10:33.602979 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:10:33.603122 master-0 kubenswrapper[8244]: I0318 10:10:33.603052 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:10:33.603166 master-0 kubenswrapper[8244]: I0318 10:10:33.603131 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" 
(UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:10:33.603298 master-0 kubenswrapper[8244]: I0318 10:10:33.603265 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:10:33.603298 master-0 kubenswrapper[8244]: I0318 10:10:33.603294 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:10:33.603387 master-0 kubenswrapper[8244]: I0318 10:10:33.603339 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:10:33.603387 master-0 kubenswrapper[8244]: I0318 10:10:33.603384 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:10:33.603476 master-0 kubenswrapper[8244]: I0318 10:10:33.603406 8244 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:10:33.603561 master-0 kubenswrapper[8244]: I0318 10:10:33.603533 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:10:33.603659 master-0 kubenswrapper[8244]: I0318 10:10:33.603576 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:10:33.603659 master-0 kubenswrapper[8244]: I0318 10:10:33.603577 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:10:33.603757 master-0 kubenswrapper[8244]: I0318 10:10:33.603671 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:10:33.603757 master-0 kubenswrapper[8244]: I0318 10:10:33.603720 8244 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:10:33.603873 master-0 kubenswrapper[8244]: I0318 10:10:33.603800 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:10:33.603935 master-0 kubenswrapper[8244]: I0318 10:10:33.603870 8244 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:10:33.732694 master-0 kubenswrapper[8244]: I0318 10:10:33.732504 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 10:10:33.741169 master-0 kubenswrapper[8244]: I0318 10:10:33.741063 8244 status_manager.go:851] "Failed to get status for pod" podUID="3ddfa5bb627414042dcc2d2204092c5a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 10:10:33.742419 master-0 kubenswrapper[8244]: I0318 10:10:33.742343 8244 status_manager.go:851] "Failed to get status for pod" podUID="3ddfa5bb627414042dcc2d2204092c5a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 10:10:33.772476 master-0 kubenswrapper[8244]: I0318 10:10:33.772372 8244 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="8f17c553-e707-4d8d-bd31-e8f28f3898bb" Mar 18 10:10:33.772476 master-0 kubenswrapper[8244]: I0318 10:10:33.772441 8244 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="8f17c553-e707-4d8d-bd31-e8f28f3898bb" Mar 18 10:10:33.773662 master-0 kubenswrapper[8244]: E0318 10:10:33.773567 8244 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 10:10:33.774417 master-0 kubenswrapper[8244]: I0318 10:10:33.774377 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 10:10:33.802990 master-0 kubenswrapper[8244]: W0318 10:10:33.802914 8244 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11a2f93448b9d54da9854663936e2b73.slice/crio-06e9465576405f83c2377274bbe7c9f80c7e1d2afadf9ee173551a2f7f95d786 WatchSource:0}: Error finding container 06e9465576405f83c2377274bbe7c9f80c7e1d2afadf9ee173551a2f7f95d786: Status 404 returned error can't find the container with id 06e9465576405f83c2377274bbe7c9f80c7e1d2afadf9ee173551a2f7f95d786 Mar 18 10:10:33.808182 master-0 kubenswrapper[8244]: E0318 10:10:33.807901 8244 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{openshift-kube-scheduler-master-0.189de7be8a7062d9 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-master-0,UID:11a2f93448b9d54da9854663936e2b73,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 10:10:33.806267097 +0000 UTC m=+950.286003255,LastTimestamp:2026-03-18 10:10:33.806267097 +0000 UTC m=+950.286003255,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 10:10:33.809788 master-0 kubenswrapper[8244]: I0318 10:10:33.809746 8244 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:10:33.829763 master-0 kubenswrapper[8244]: I0318 10:10:33.829721 8244 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:10:34.402211 master-0 kubenswrapper[8244]: I0318 10:10:34.402145 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:10:34.402211 master-0 kubenswrapper[8244]: [-]has-synced failed: reason withheld Mar 18 10:10:34.402211 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:10:34.402211 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:10:34.402598 master-0 kubenswrapper[8244]: I0318 10:10:34.402240 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:10:34.576303 master-0 kubenswrapper[8244]: I0318 10:10:34.576208 8244 generic.go:334] "Generic (PLEG): container finished" podID="a3657106-1eea-4031-8c92-85ba6287b425" containerID="06c0be19470a9053df1e868da4f3dfc9b3f3db58cf48affc02d1dbbb79a51995" exitCode=0 Mar 18 10:10:34.577319 master-0 kubenswrapper[8244]: I0318 10:10:34.576321 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" event={"ID":"a3657106-1eea-4031-8c92-85ba6287b425","Type":"ContainerDied","Data":"06c0be19470a9053df1e868da4f3dfc9b3f3db58cf48affc02d1dbbb79a51995"} Mar 18 10:10:34.578145 master-0 kubenswrapper[8244]: I0318 10:10:34.578074 8244 status_manager.go:851] "Failed to get status for pod" podUID="a3657106-1eea-4031-8c92-85ba6287b425" 
pod="openshift-kube-apiserver/installer-4-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 10:10:34.578914 master-0 kubenswrapper[8244]: I0318 10:10:34.578861 8244 generic.go:334] "Generic (PLEG): container finished" podID="7d5ce05b3d592e63f1f92202d52b9635" containerID="51b7edc7a7043b6e7a110520107ff6d77f1544d8cbe4bac90f24bd4ae0e3e289" exitCode=0 Mar 18 10:10:34.579090 master-0 kubenswrapper[8244]: I0318 10:10:34.578969 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerDied","Data":"51b7edc7a7043b6e7a110520107ff6d77f1544d8cbe4bac90f24bd4ae0e3e289"} Mar 18 10:10:34.579090 master-0 kubenswrapper[8244]: I0318 10:10:34.579027 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"187a5eb02f6d39f4d5d17d569f5578af7e87c01c9503e828b0f618e0f62581eb"} Mar 18 10:10:34.579687 master-0 kubenswrapper[8244]: I0318 10:10:34.579399 8244 status_manager.go:851] "Failed to get status for pod" podUID="3ddfa5bb627414042dcc2d2204092c5a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 10:10:34.580711 master-0 kubenswrapper[8244]: I0318 10:10:34.580668 8244 status_manager.go:851] "Failed to get status for pod" podUID="3ddfa5bb627414042dcc2d2204092c5a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 10:10:34.580890 master-0 kubenswrapper[8244]: E0318 10:10:34.580710 8244 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:10:34.581896 master-0 kubenswrapper[8244]: I0318 10:10:34.581707 8244 status_manager.go:851] "Failed to get status for pod" podUID="a3657106-1eea-4031-8c92-85ba6287b425" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 10:10:34.582053 master-0 kubenswrapper[8244]: I0318 10:10:34.581990 8244 generic.go:334] "Generic (PLEG): container finished" podID="11a2f93448b9d54da9854663936e2b73" containerID="dbf2586f3189d0b8f9dc638d92901a45e6cf3cdbf23cf4bd198e6fe898ec14b2" exitCode=0 Mar 18 10:10:34.582130 master-0 kubenswrapper[8244]: I0318 10:10:34.582045 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"11a2f93448b9d54da9854663936e2b73","Type":"ContainerDied","Data":"dbf2586f3189d0b8f9dc638d92901a45e6cf3cdbf23cf4bd198e6fe898ec14b2"} Mar 18 10:10:34.582130 master-0 kubenswrapper[8244]: I0318 10:10:34.582086 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"11a2f93448b9d54da9854663936e2b73","Type":"ContainerStarted","Data":"06e9465576405f83c2377274bbe7c9f80c7e1d2afadf9ee173551a2f7f95d786"} Mar 18 10:10:34.582736 master-0 kubenswrapper[8244]: I0318 10:10:34.582675 8244 
kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="8f17c553-e707-4d8d-bd31-e8f28f3898bb" Mar 18 10:10:34.582736 master-0 kubenswrapper[8244]: I0318 10:10:34.582724 8244 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="8f17c553-e707-4d8d-bd31-e8f28f3898bb" Mar 18 10:10:34.584223 master-0 kubenswrapper[8244]: I0318 10:10:34.583464 8244 status_manager.go:851] "Failed to get status for pod" podUID="a3657106-1eea-4031-8c92-85ba6287b425" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 10:10:34.584223 master-0 kubenswrapper[8244]: E0318 10:10:34.583482 8244 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 10:10:34.584400 master-0 kubenswrapper[8244]: I0318 10:10:34.584366 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"16fb4ea7f83036d9c6adf3454fc7e9db","Type":"ContainerStarted","Data":"66dba26b707d8a7ef9a56c2e052eb81cdb6a21e228ccc4ca178ec7f65804ffae"} Mar 18 10:10:34.584562 master-0 kubenswrapper[8244]: I0318 10:10:34.584413 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"16fb4ea7f83036d9c6adf3454fc7e9db","Type":"ContainerStarted","Data":"03355a5e2caa4496c4b10efd4243dd60c302d54b340a80972ebe3e5661f0dd6b"} Mar 18 10:10:34.586728 master-0 kubenswrapper[8244]: E0318 10:10:34.585860 8244 
kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:10:34.586728 master-0 kubenswrapper[8244]: I0318 10:10:34.585952 8244 status_manager.go:851] "Failed to get status for pod" podUID="3ddfa5bb627414042dcc2d2204092c5a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 10:10:34.587093 master-0 kubenswrapper[8244]: I0318 10:10:34.587028 8244 status_manager.go:851] "Failed to get status for pod" podUID="a3657106-1eea-4031-8c92-85ba6287b425" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 10:10:34.588123 master-0 kubenswrapper[8244]: I0318 10:10:34.587996 8244 status_manager.go:851] "Failed to get status for pod" podUID="3ddfa5bb627414042dcc2d2204092c5a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 10:10:35.400795 master-0 kubenswrapper[8244]: I0318 10:10:35.400727 8244 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 10:10:35.400795 master-0 kubenswrapper[8244]: 
[-]has-synced failed: reason withheld Mar 18 10:10:35.400795 master-0 kubenswrapper[8244]: [+]process-running ok Mar 18 10:10:35.400795 master-0 kubenswrapper[8244]: healthz check failed Mar 18 10:10:35.400795 master-0 kubenswrapper[8244]: I0318 10:10:35.400791 8244 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 10:10:35.606245 master-0 kubenswrapper[8244]: I0318 10:10:35.604869 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"11a2f93448b9d54da9854663936e2b73","Type":"ContainerStarted","Data":"346bcdf104d2ea10327572091843ffc672c87624551d190458c48063f43a2f22"} Mar 18 10:10:35.606245 master-0 kubenswrapper[8244]: I0318 10:10:35.604937 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"11a2f93448b9d54da9854663936e2b73","Type":"ContainerStarted","Data":"86ad2fe80dc58ccabdc7ba9d7e52d68245236d6e0eab6c192777c1cb03777ee6"} Mar 18 10:10:35.620482 master-0 kubenswrapper[8244]: I0318 10:10:35.620306 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"16fdc2ec4e971dcc798c97b8e7872ebfb3c5f1297f42468d5129028ea0f9e0fb"} Mar 18 10:10:35.620482 master-0 kubenswrapper[8244]: I0318 10:10:35.620354 8244 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"ded34cf4fdd9b643ef1d1a0098bfad469ce8ef5239dc4118284afcfde2e248f3"} Mar 18 10:10:35.747538 master-0 kubenswrapper[8244]: I0318 10:10:35.708583 8244 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 10:10:35.750603 master-0 kubenswrapper[8244]: I0318 10:10:35.750561 8244 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 18 10:10:35.750898 master-0 systemd[1]: Stopping Kubernetes Kubelet... Mar 18 10:10:35.841473 master-0 systemd[1]: kubelet.service: Deactivated successfully. Mar 18 10:10:35.841728 master-0 systemd[1]: Stopped Kubernetes Kubelet. Mar 18 10:10:35.843161 master-0 systemd[1]: kubelet.service: Consumed 2min 25.553s CPU time. Mar 18 10:10:35.862534 master-0 systemd[1]: Starting Kubernetes Kubelet... Mar 18 10:10:35.997580 master-0 kubenswrapper[30420]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 10:10:35.997580 master-0 kubenswrapper[30420]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 18 10:10:35.997580 master-0 kubenswrapper[30420]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 10:10:35.997580 master-0 kubenswrapper[30420]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 10:10:35.997580 master-0 kubenswrapper[30420]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Mar 18 10:10:35.997580 master-0 kubenswrapper[30420]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 10:10:35.999998 master-0 kubenswrapper[30420]: I0318 10:10:35.998074 30420 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 18 10:10:36.002203 master-0 kubenswrapper[30420]: W0318 10:10:36.002176 30420 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 18 10:10:36.002203 master-0 kubenswrapper[30420]: W0318 10:10:36.002194 30420 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 18 10:10:36.002203 master-0 kubenswrapper[30420]: W0318 10:10:36.002200 30420 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 18 10:10:36.002203 master-0 kubenswrapper[30420]: W0318 10:10:36.002206 30420 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 18 10:10:36.002203 master-0 kubenswrapper[30420]: W0318 10:10:36.002210 30420 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 18 10:10:36.002360 master-0 kubenswrapper[30420]: W0318 10:10:36.002215 30420 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 18 10:10:36.002360 master-0 kubenswrapper[30420]: W0318 10:10:36.002219 30420 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 18 10:10:36.002360 master-0 kubenswrapper[30420]: W0318 10:10:36.002223 30420 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 18 10:10:36.002360 master-0 kubenswrapper[30420]: W0318 10:10:36.002227 30420 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 18 10:10:36.002360 master-0 kubenswrapper[30420]: W0318 10:10:36.002232 30420 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 18 10:10:36.002360 master-0 kubenswrapper[30420]: W0318 10:10:36.002235 30420 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 18 10:10:36.002360 master-0 kubenswrapper[30420]: W0318 10:10:36.002239 30420 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 18 10:10:36.002360 master-0 kubenswrapper[30420]: W0318 10:10:36.002243 30420 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 18 10:10:36.002360 master-0 kubenswrapper[30420]: W0318 10:10:36.002246 30420 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 18 10:10:36.002360 master-0 kubenswrapper[30420]: W0318 10:10:36.002250 30420 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 18 10:10:36.002360 master-0 kubenswrapper[30420]: W0318 10:10:36.002254 30420 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 18 10:10:36.002360 master-0 
kubenswrapper[30420]: W0318 10:10:36.002258 30420 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 18 10:10:36.002360 master-0 kubenswrapper[30420]: W0318 10:10:36.002262 30420 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 18 10:10:36.002360 master-0 kubenswrapper[30420]: W0318 10:10:36.002265 30420 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 18 10:10:36.002360 master-0 kubenswrapper[30420]: W0318 10:10:36.002273 30420 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 18 10:10:36.002360 master-0 kubenswrapper[30420]: W0318 10:10:36.002277 30420 feature_gate.go:330] unrecognized feature gate: Example Mar 18 10:10:36.002360 master-0 kubenswrapper[30420]: W0318 10:10:36.002280 30420 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 18 10:10:36.002360 master-0 kubenswrapper[30420]: W0318 10:10:36.002284 30420 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 18 10:10:36.002360 master-0 kubenswrapper[30420]: W0318 10:10:36.002288 30420 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 18 10:10:36.002360 master-0 kubenswrapper[30420]: W0318 10:10:36.002293 30420 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 18 10:10:36.002928 master-0 kubenswrapper[30420]: W0318 10:10:36.002296 30420 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 18 10:10:36.002928 master-0 kubenswrapper[30420]: W0318 10:10:36.002300 30420 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 18 10:10:36.002928 master-0 kubenswrapper[30420]: W0318 10:10:36.002304 30420 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 18 10:10:36.002928 master-0 kubenswrapper[30420]: W0318 10:10:36.002307 30420 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 18 10:10:36.002928 master-0 kubenswrapper[30420]: W0318 
10:10:36.002312 30420 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 18 10:10:36.002928 master-0 kubenswrapper[30420]: W0318 10:10:36.002315 30420 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 18 10:10:36.002928 master-0 kubenswrapper[30420]: W0318 10:10:36.002320 30420 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 18 10:10:36.002928 master-0 kubenswrapper[30420]: W0318 10:10:36.002325 30420 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 18 10:10:36.002928 master-0 kubenswrapper[30420]: W0318 10:10:36.002330 30420 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 18 10:10:36.002928 master-0 kubenswrapper[30420]: W0318 10:10:36.002334 30420 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 18 10:10:36.002928 master-0 kubenswrapper[30420]: W0318 10:10:36.002338 30420 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 18 10:10:36.002928 master-0 kubenswrapper[30420]: W0318 10:10:36.002342 30420 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 18 10:10:36.002928 master-0 kubenswrapper[30420]: W0318 10:10:36.002346 30420 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 18 10:10:36.002928 master-0 kubenswrapper[30420]: W0318 10:10:36.002350 30420 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 18 10:10:36.002928 master-0 kubenswrapper[30420]: W0318 10:10:36.002353 30420 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 18 10:10:36.002928 master-0 kubenswrapper[30420]: W0318 10:10:36.002357 30420 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 18 10:10:36.002928 master-0 kubenswrapper[30420]: W0318 10:10:36.002361 30420 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 18 
10:10:36.002928 master-0 kubenswrapper[30420]: W0318 10:10:36.002366 30420 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 18 10:10:36.002928 master-0 kubenswrapper[30420]: W0318 10:10:36.002370 30420 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 18 10:10:36.003420 master-0 kubenswrapper[30420]: W0318 10:10:36.002373 30420 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 18 10:10:36.003420 master-0 kubenswrapper[30420]: W0318 10:10:36.002377 30420 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 18 10:10:36.003420 master-0 kubenswrapper[30420]: W0318 10:10:36.002381 30420 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 18 10:10:36.003420 master-0 kubenswrapper[30420]: W0318 10:10:36.002385 30420 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 18 10:10:36.003420 master-0 kubenswrapper[30420]: W0318 10:10:36.002390 30420 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 18 10:10:36.003420 master-0 kubenswrapper[30420]: W0318 10:10:36.002393 30420 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 18 10:10:36.003420 master-0 kubenswrapper[30420]: W0318 10:10:36.002397 30420 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 18 10:10:36.003420 master-0 kubenswrapper[30420]: W0318 10:10:36.002400 30420 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 18 10:10:36.003420 master-0 kubenswrapper[30420]: W0318 10:10:36.002405 30420 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 18 10:10:36.003420 master-0 kubenswrapper[30420]: W0318 10:10:36.002409 30420 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 18 10:10:36.003420 master-0 kubenswrapper[30420]: W0318 10:10:36.002421 30420 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 18 10:10:36.003420 master-0 kubenswrapper[30420]: W0318 10:10:36.002425 30420 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 18 10:10:36.003420 master-0 kubenswrapper[30420]: W0318 10:10:36.002429 30420 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 18 10:10:36.003420 master-0 kubenswrapper[30420]: W0318 10:10:36.002432 30420 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 18 10:10:36.003420 master-0 kubenswrapper[30420]: W0318 10:10:36.002436 30420 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 18 10:10:36.003420 master-0 kubenswrapper[30420]: W0318 10:10:36.002440 30420 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 18 10:10:36.003420 master-0 kubenswrapper[30420]: W0318 10:10:36.002443 30420 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 18 10:10:36.003420 master-0 kubenswrapper[30420]: W0318 10:10:36.002446 30420 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 18 10:10:36.003420 master-0 kubenswrapper[30420]: W0318 10:10:36.002450 30420 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 18 10:10:36.003420 master-0 kubenswrapper[30420]: W0318 10:10:36.002454 30420 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 18 10:10:36.003974 master-0 kubenswrapper[30420]: W0318 10:10:36.002458 30420 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 18 10:10:36.003974 master-0 kubenswrapper[30420]: W0318 10:10:36.002461 30420 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 18 10:10:36.003974 master-0 
kubenswrapper[30420]: W0318 10:10:36.002465 30420 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 18 10:10:36.003974 master-0 kubenswrapper[30420]: W0318 10:10:36.002468 30420 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 18 10:10:36.003974 master-0 kubenswrapper[30420]: W0318 10:10:36.002472 30420 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 18 10:10:36.003974 master-0 kubenswrapper[30420]: W0318 10:10:36.002475 30420 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 18 10:10:36.003974 master-0 kubenswrapper[30420]: W0318 10:10:36.002479 30420 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 18 10:10:36.003974 master-0 kubenswrapper[30420]: W0318 10:10:36.002483 30420 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 18 10:10:36.003974 master-0 kubenswrapper[30420]: I0318 10:10:36.002591 30420 flags.go:64] FLAG: --address="0.0.0.0" Mar 18 10:10:36.003974 master-0 kubenswrapper[30420]: I0318 10:10:36.002599 30420 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Mar 18 10:10:36.003974 master-0 kubenswrapper[30420]: I0318 10:10:36.002606 30420 flags.go:64] FLAG: --anonymous-auth="true" Mar 18 10:10:36.003974 master-0 kubenswrapper[30420]: I0318 10:10:36.002612 30420 flags.go:64] FLAG: --application-metrics-count-limit="100" Mar 18 10:10:36.003974 master-0 kubenswrapper[30420]: I0318 10:10:36.002618 30420 flags.go:64] FLAG: --authentication-token-webhook="false" Mar 18 10:10:36.003974 master-0 kubenswrapper[30420]: I0318 10:10:36.002623 30420 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Mar 18 10:10:36.003974 master-0 kubenswrapper[30420]: I0318 10:10:36.002629 30420 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Mar 18 10:10:36.003974 master-0 kubenswrapper[30420]: I0318 10:10:36.002634 30420 flags.go:64] FLAG: 
--authorization-webhook-cache-authorized-ttl="5m0s" Mar 18 10:10:36.003974 master-0 kubenswrapper[30420]: I0318 10:10:36.002638 30420 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Mar 18 10:10:36.003974 master-0 kubenswrapper[30420]: I0318 10:10:36.002643 30420 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Mar 18 10:10:36.003974 master-0 kubenswrapper[30420]: I0318 10:10:36.002648 30420 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Mar 18 10:10:36.003974 master-0 kubenswrapper[30420]: I0318 10:10:36.002652 30420 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Mar 18 10:10:36.003974 master-0 kubenswrapper[30420]: I0318 10:10:36.002660 30420 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Mar 18 10:10:36.003974 master-0 kubenswrapper[30420]: I0318 10:10:36.002665 30420 flags.go:64] FLAG: --cgroup-root="" Mar 18 10:10:36.004530 master-0 kubenswrapper[30420]: I0318 10:10:36.002669 30420 flags.go:64] FLAG: --cgroups-per-qos="true" Mar 18 10:10:36.004530 master-0 kubenswrapper[30420]: I0318 10:10:36.002673 30420 flags.go:64] FLAG: --client-ca-file="" Mar 18 10:10:36.004530 master-0 kubenswrapper[30420]: I0318 10:10:36.002677 30420 flags.go:64] FLAG: --cloud-config="" Mar 18 10:10:36.004530 master-0 kubenswrapper[30420]: I0318 10:10:36.002681 30420 flags.go:64] FLAG: --cloud-provider="" Mar 18 10:10:36.004530 master-0 kubenswrapper[30420]: I0318 10:10:36.002686 30420 flags.go:64] FLAG: --cluster-dns="[]" Mar 18 10:10:36.004530 master-0 kubenswrapper[30420]: I0318 10:10:36.002691 30420 flags.go:64] FLAG: --cluster-domain="" Mar 18 10:10:36.004530 master-0 kubenswrapper[30420]: I0318 10:10:36.002695 30420 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Mar 18 10:10:36.004530 master-0 kubenswrapper[30420]: I0318 10:10:36.002700 30420 flags.go:64] FLAG: --config-dir="" Mar 18 10:10:36.004530 master-0 kubenswrapper[30420]: I0318 10:10:36.002704 30420 flags.go:64] FLAG: 
--container-hints="/etc/cadvisor/container_hints.json" Mar 18 10:10:36.004530 master-0 kubenswrapper[30420]: I0318 10:10:36.002709 30420 flags.go:64] FLAG: --container-log-max-files="5" Mar 18 10:10:36.004530 master-0 kubenswrapper[30420]: I0318 10:10:36.002714 30420 flags.go:64] FLAG: --container-log-max-size="10Mi" Mar 18 10:10:36.004530 master-0 kubenswrapper[30420]: I0318 10:10:36.002718 30420 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Mar 18 10:10:36.004530 master-0 kubenswrapper[30420]: I0318 10:10:36.002723 30420 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Mar 18 10:10:36.004530 master-0 kubenswrapper[30420]: I0318 10:10:36.002727 30420 flags.go:64] FLAG: --containerd-namespace="k8s.io" Mar 18 10:10:36.004530 master-0 kubenswrapper[30420]: I0318 10:10:36.002731 30420 flags.go:64] FLAG: --contention-profiling="false" Mar 18 10:10:36.004530 master-0 kubenswrapper[30420]: I0318 10:10:36.002736 30420 flags.go:64] FLAG: --cpu-cfs-quota="true" Mar 18 10:10:36.004530 master-0 kubenswrapper[30420]: I0318 10:10:36.002740 30420 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Mar 18 10:10:36.004530 master-0 kubenswrapper[30420]: I0318 10:10:36.002745 30420 flags.go:64] FLAG: --cpu-manager-policy="none" Mar 18 10:10:36.004530 master-0 kubenswrapper[30420]: I0318 10:10:36.002749 30420 flags.go:64] FLAG: --cpu-manager-policy-options="" Mar 18 10:10:36.004530 master-0 kubenswrapper[30420]: I0318 10:10:36.002755 30420 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Mar 18 10:10:36.004530 master-0 kubenswrapper[30420]: I0318 10:10:36.002760 30420 flags.go:64] FLAG: --enable-controller-attach-detach="true" Mar 18 10:10:36.004530 master-0 kubenswrapper[30420]: I0318 10:10:36.002764 30420 flags.go:64] FLAG: --enable-debugging-handlers="true" Mar 18 10:10:36.004530 master-0 kubenswrapper[30420]: I0318 10:10:36.002768 30420 flags.go:64] FLAG: --enable-load-reader="false" Mar 18 10:10:36.004530 master-0 
kubenswrapper[30420]: I0318 10:10:36.002772 30420 flags.go:64] FLAG: --enable-server="true" Mar 18 10:10:36.004530 master-0 kubenswrapper[30420]: I0318 10:10:36.002777 30420 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Mar 18 10:10:36.007026 master-0 kubenswrapper[30420]: I0318 10:10:36.002782 30420 flags.go:64] FLAG: --event-burst="100" Mar 18 10:10:36.007026 master-0 kubenswrapper[30420]: I0318 10:10:36.002786 30420 flags.go:64] FLAG: --event-qps="50" Mar 18 10:10:36.007026 master-0 kubenswrapper[30420]: I0318 10:10:36.002791 30420 flags.go:64] FLAG: --event-storage-age-limit="default=0" Mar 18 10:10:36.007026 master-0 kubenswrapper[30420]: I0318 10:10:36.002795 30420 flags.go:64] FLAG: --event-storage-event-limit="default=0" Mar 18 10:10:36.007026 master-0 kubenswrapper[30420]: I0318 10:10:36.002800 30420 flags.go:64] FLAG: --eviction-hard="" Mar 18 10:10:36.007026 master-0 kubenswrapper[30420]: I0318 10:10:36.002805 30420 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Mar 18 10:10:36.007026 master-0 kubenswrapper[30420]: I0318 10:10:36.002813 30420 flags.go:64] FLAG: --eviction-minimum-reclaim="" Mar 18 10:10:36.007026 master-0 kubenswrapper[30420]: I0318 10:10:36.002818 30420 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Mar 18 10:10:36.007026 master-0 kubenswrapper[30420]: I0318 10:10:36.002825 30420 flags.go:64] FLAG: --eviction-soft="" Mar 18 10:10:36.007026 master-0 kubenswrapper[30420]: I0318 10:10:36.002876 30420 flags.go:64] FLAG: --eviction-soft-grace-period="" Mar 18 10:10:36.007026 master-0 kubenswrapper[30420]: I0318 10:10:36.002881 30420 flags.go:64] FLAG: --exit-on-lock-contention="false" Mar 18 10:10:36.007026 master-0 kubenswrapper[30420]: I0318 10:10:36.002886 30420 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Mar 18 10:10:36.007026 master-0 kubenswrapper[30420]: I0318 10:10:36.002890 30420 flags.go:64] FLAG: --experimental-mounter-path="" Mar 18 10:10:36.007026 master-0 
kubenswrapper[30420]: I0318 10:10:36.002895 30420 flags.go:64] FLAG: --fail-cgroupv1="false" Mar 18 10:10:36.007026 master-0 kubenswrapper[30420]: I0318 10:10:36.002899 30420 flags.go:64] FLAG: --fail-swap-on="true" Mar 18 10:10:36.007026 master-0 kubenswrapper[30420]: I0318 10:10:36.002903 30420 flags.go:64] FLAG: --feature-gates="" Mar 18 10:10:36.007026 master-0 kubenswrapper[30420]: I0318 10:10:36.002909 30420 flags.go:64] FLAG: --file-check-frequency="20s" Mar 18 10:10:36.007026 master-0 kubenswrapper[30420]: I0318 10:10:36.002913 30420 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Mar 18 10:10:36.007026 master-0 kubenswrapper[30420]: I0318 10:10:36.002917 30420 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Mar 18 10:10:36.007026 master-0 kubenswrapper[30420]: I0318 10:10:36.002922 30420 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Mar 18 10:10:36.007026 master-0 kubenswrapper[30420]: I0318 10:10:36.002928 30420 flags.go:64] FLAG: --healthz-port="10248" Mar 18 10:10:36.007026 master-0 kubenswrapper[30420]: I0318 10:10:36.002933 30420 flags.go:64] FLAG: --help="false" Mar 18 10:10:36.007026 master-0 kubenswrapper[30420]: I0318 10:10:36.002938 30420 flags.go:64] FLAG: --hostname-override="" Mar 18 10:10:36.007026 master-0 kubenswrapper[30420]: I0318 10:10:36.002942 30420 flags.go:64] FLAG: --housekeeping-interval="10s" Mar 18 10:10:36.007026 master-0 kubenswrapper[30420]: I0318 10:10:36.002947 30420 flags.go:64] FLAG: --http-check-frequency="20s" Mar 18 10:10:36.007026 master-0 kubenswrapper[30420]: I0318 10:10:36.002952 30420 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Mar 18 10:10:36.007776 master-0 kubenswrapper[30420]: I0318 10:10:36.002956 30420 flags.go:64] FLAG: --image-credential-provider-config="" Mar 18 10:10:36.007776 master-0 kubenswrapper[30420]: I0318 10:10:36.002960 30420 flags.go:64] FLAG: --image-gc-high-threshold="85" Mar 18 10:10:36.007776 master-0 kubenswrapper[30420]: I0318 10:10:36.002964 30420 
flags.go:64] FLAG: --image-gc-low-threshold="80" Mar 18 10:10:36.007776 master-0 kubenswrapper[30420]: I0318 10:10:36.002969 30420 flags.go:64] FLAG: --image-service-endpoint="" Mar 18 10:10:36.007776 master-0 kubenswrapper[30420]: I0318 10:10:36.002973 30420 flags.go:64] FLAG: --kernel-memcg-notification="false" Mar 18 10:10:36.007776 master-0 kubenswrapper[30420]: I0318 10:10:36.002977 30420 flags.go:64] FLAG: --kube-api-burst="100" Mar 18 10:10:36.007776 master-0 kubenswrapper[30420]: I0318 10:10:36.002982 30420 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Mar 18 10:10:36.007776 master-0 kubenswrapper[30420]: I0318 10:10:36.002986 30420 flags.go:64] FLAG: --kube-api-qps="50" Mar 18 10:10:36.007776 master-0 kubenswrapper[30420]: I0318 10:10:36.002990 30420 flags.go:64] FLAG: --kube-reserved="" Mar 18 10:10:36.007776 master-0 kubenswrapper[30420]: I0318 10:10:36.002994 30420 flags.go:64] FLAG: --kube-reserved-cgroup="" Mar 18 10:10:36.007776 master-0 kubenswrapper[30420]: I0318 10:10:36.002998 30420 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Mar 18 10:10:36.007776 master-0 kubenswrapper[30420]: I0318 10:10:36.003003 30420 flags.go:64] FLAG: --kubelet-cgroups="" Mar 18 10:10:36.007776 master-0 kubenswrapper[30420]: I0318 10:10:36.003008 30420 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Mar 18 10:10:36.007776 master-0 kubenswrapper[30420]: I0318 10:10:36.003013 30420 flags.go:64] FLAG: --lock-file="" Mar 18 10:10:36.007776 master-0 kubenswrapper[30420]: I0318 10:10:36.003017 30420 flags.go:64] FLAG: --log-cadvisor-usage="false" Mar 18 10:10:36.007776 master-0 kubenswrapper[30420]: I0318 10:10:36.003021 30420 flags.go:64] FLAG: --log-flush-frequency="5s" Mar 18 10:10:36.007776 master-0 kubenswrapper[30420]: I0318 10:10:36.003026 30420 flags.go:64] FLAG: --log-json-info-buffer-size="0" Mar 18 10:10:36.007776 master-0 kubenswrapper[30420]: I0318 10:10:36.003032 30420 flags.go:64] FLAG: 
--log-json-split-stream="false" Mar 18 10:10:36.007776 master-0 kubenswrapper[30420]: I0318 10:10:36.003036 30420 flags.go:64] FLAG: --log-text-info-buffer-size="0" Mar 18 10:10:36.007776 master-0 kubenswrapper[30420]: I0318 10:10:36.003041 30420 flags.go:64] FLAG: --log-text-split-stream="false" Mar 18 10:10:36.007776 master-0 kubenswrapper[30420]: I0318 10:10:36.003045 30420 flags.go:64] FLAG: --logging-format="text" Mar 18 10:10:36.007776 master-0 kubenswrapper[30420]: I0318 10:10:36.003050 30420 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Mar 18 10:10:36.007776 master-0 kubenswrapper[30420]: I0318 10:10:36.003054 30420 flags.go:64] FLAG: --make-iptables-util-chains="true" Mar 18 10:10:36.007776 master-0 kubenswrapper[30420]: I0318 10:10:36.003059 30420 flags.go:64] FLAG: --manifest-url="" Mar 18 10:10:36.007776 master-0 kubenswrapper[30420]: I0318 10:10:36.003063 30420 flags.go:64] FLAG: --manifest-url-header="" Mar 18 10:10:36.008566 master-0 kubenswrapper[30420]: I0318 10:10:36.003072 30420 flags.go:64] FLAG: --max-housekeeping-interval="15s" Mar 18 10:10:36.008566 master-0 kubenswrapper[30420]: I0318 10:10:36.003076 30420 flags.go:64] FLAG: --max-open-files="1000000" Mar 18 10:10:36.008566 master-0 kubenswrapper[30420]: I0318 10:10:36.003081 30420 flags.go:64] FLAG: --max-pods="110" Mar 18 10:10:36.008566 master-0 kubenswrapper[30420]: I0318 10:10:36.003086 30420 flags.go:64] FLAG: --maximum-dead-containers="-1" Mar 18 10:10:36.008566 master-0 kubenswrapper[30420]: I0318 10:10:36.003090 30420 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Mar 18 10:10:36.008566 master-0 kubenswrapper[30420]: I0318 10:10:36.003094 30420 flags.go:64] FLAG: --memory-manager-policy="None" Mar 18 10:10:36.008566 master-0 kubenswrapper[30420]: I0318 10:10:36.003099 30420 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Mar 18 10:10:36.008566 master-0 kubenswrapper[30420]: I0318 10:10:36.003103 30420 flags.go:64] FLAG: 
--minimum-image-ttl-duration="2m0s"
Mar 18 10:10:36.008566 master-0 kubenswrapper[30420]: I0318 10:10:36.003108 30420 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 18 10:10:36.008566 master-0 kubenswrapper[30420]: I0318 10:10:36.003112 30420 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 18 10:10:36.008566 master-0 kubenswrapper[30420]: I0318 10:10:36.003122 30420 flags.go:64] FLAG: --node-status-max-images="50"
Mar 18 10:10:36.008566 master-0 kubenswrapper[30420]: I0318 10:10:36.003126 30420 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 18 10:10:36.008566 master-0 kubenswrapper[30420]: I0318 10:10:36.003131 30420 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 18 10:10:36.008566 master-0 kubenswrapper[30420]: I0318 10:10:36.003135 30420 flags.go:64] FLAG: --pod-cidr=""
Mar 18 10:10:36.008566 master-0 kubenswrapper[30420]: I0318 10:10:36.003140 30420 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422"
Mar 18 10:10:36.008566 master-0 kubenswrapper[30420]: I0318 10:10:36.003146 30420 flags.go:64] FLAG: --pod-manifest-path=""
Mar 18 10:10:36.008566 master-0 kubenswrapper[30420]: I0318 10:10:36.003152 30420 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 18 10:10:36.008566 master-0 kubenswrapper[30420]: I0318 10:10:36.003156 30420 flags.go:64] FLAG: --pods-per-core="0"
Mar 18 10:10:36.008566 master-0 kubenswrapper[30420]: I0318 10:10:36.003160 30420 flags.go:64] FLAG: --port="10250"
Mar 18 10:10:36.008566 master-0 kubenswrapper[30420]: I0318 10:10:36.003165 30420 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 18 10:10:36.008566 master-0 kubenswrapper[30420]: I0318 10:10:36.003169 30420 flags.go:64] FLAG: --provider-id=""
Mar 18 10:10:36.008566 master-0 kubenswrapper[30420]: I0318 10:10:36.003173 30420 flags.go:64] FLAG: --qos-reserved=""
Mar 18 10:10:36.008566 master-0 kubenswrapper[30420]: I0318 10:10:36.003178 30420 flags.go:64] FLAG: --read-only-port="10255"
Mar 18 10:10:36.008566 master-0 kubenswrapper[30420]: I0318 10:10:36.003182 30420 flags.go:64] FLAG: --register-node="true"
Mar 18 10:10:36.011741 master-0 kubenswrapper[30420]: I0318 10:10:36.003186 30420 flags.go:64] FLAG: --register-schedulable="true"
Mar 18 10:10:36.011741 master-0 kubenswrapper[30420]: I0318 10:10:36.003190 30420 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 18 10:10:36.011741 master-0 kubenswrapper[30420]: I0318 10:10:36.003197 30420 flags.go:64] FLAG: --registry-burst="10"
Mar 18 10:10:36.011741 master-0 kubenswrapper[30420]: I0318 10:10:36.003202 30420 flags.go:64] FLAG: --registry-qps="5"
Mar 18 10:10:36.011741 master-0 kubenswrapper[30420]: I0318 10:10:36.003206 30420 flags.go:64] FLAG: --reserved-cpus=""
Mar 18 10:10:36.011741 master-0 kubenswrapper[30420]: I0318 10:10:36.003211 30420 flags.go:64] FLAG: --reserved-memory=""
Mar 18 10:10:36.011741 master-0 kubenswrapper[30420]: I0318 10:10:36.003217 30420 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 18 10:10:36.011741 master-0 kubenswrapper[30420]: I0318 10:10:36.003221 30420 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 18 10:10:36.011741 master-0 kubenswrapper[30420]: I0318 10:10:36.003225 30420 flags.go:64] FLAG: --rotate-certificates="false"
Mar 18 10:10:36.011741 master-0 kubenswrapper[30420]: I0318 10:10:36.003230 30420 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 18 10:10:36.011741 master-0 kubenswrapper[30420]: I0318 10:10:36.003234 30420 flags.go:64] FLAG: --runonce="false"
Mar 18 10:10:36.011741 master-0 kubenswrapper[30420]: I0318 10:10:36.003238 30420 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 18 10:10:36.011741 master-0 kubenswrapper[30420]: I0318 10:10:36.003242 30420 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 18 10:10:36.011741 master-0 kubenswrapper[30420]: I0318 10:10:36.003247 30420 flags.go:64] FLAG: --seccomp-default="false"
Mar 18 10:10:36.011741 master-0 kubenswrapper[30420]: I0318 10:10:36.003251 30420 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 18 10:10:36.011741 master-0 kubenswrapper[30420]: I0318 10:10:36.003255 30420 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 18 10:10:36.011741 master-0 kubenswrapper[30420]: I0318 10:10:36.003259 30420 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 18 10:10:36.011741 master-0 kubenswrapper[30420]: I0318 10:10:36.003264 30420 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 18 10:10:36.011741 master-0 kubenswrapper[30420]: I0318 10:10:36.003268 30420 flags.go:64] FLAG: --storage-driver-password="root"
Mar 18 10:10:36.011741 master-0 kubenswrapper[30420]: I0318 10:10:36.003273 30420 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 18 10:10:36.011741 master-0 kubenswrapper[30420]: I0318 10:10:36.003277 30420 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 18 10:10:36.011741 master-0 kubenswrapper[30420]: I0318 10:10:36.003281 30420 flags.go:64] FLAG: --storage-driver-user="root"
Mar 18 10:10:36.011741 master-0 kubenswrapper[30420]: I0318 10:10:36.003291 30420 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 18 10:10:36.011741 master-0 kubenswrapper[30420]: I0318 10:10:36.003295 30420 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 18 10:10:36.011741 master-0 kubenswrapper[30420]: I0318 10:10:36.003300 30420 flags.go:64] FLAG: --system-cgroups=""
Mar 18 10:10:36.012422 master-0 kubenswrapper[30420]: I0318 10:10:36.003304 30420 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 18 10:10:36.012422 master-0 kubenswrapper[30420]: I0318 10:10:36.003310 30420 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 18 10:10:36.012422 master-0 kubenswrapper[30420]: I0318 10:10:36.003315 30420 flags.go:64] FLAG: --tls-cert-file=""
Mar 18 10:10:36.012422 master-0 kubenswrapper[30420]: I0318 10:10:36.003319 30420 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 18 10:10:36.012422 master-0 kubenswrapper[30420]: I0318 10:10:36.003325 30420 flags.go:64] FLAG: --tls-min-version=""
Mar 18 10:10:36.012422 master-0 kubenswrapper[30420]: I0318 10:10:36.003329 30420 flags.go:64] FLAG: --tls-private-key-file=""
Mar 18 10:10:36.012422 master-0 kubenswrapper[30420]: I0318 10:10:36.003333 30420 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 18 10:10:36.012422 master-0 kubenswrapper[30420]: I0318 10:10:36.003337 30420 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 18 10:10:36.012422 master-0 kubenswrapper[30420]: I0318 10:10:36.003341 30420 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 18 10:10:36.012422 master-0 kubenswrapper[30420]: I0318 10:10:36.003346 30420 flags.go:64] FLAG: --v="2"
Mar 18 10:10:36.012422 master-0 kubenswrapper[30420]: I0318 10:10:36.003352 30420 flags.go:64] FLAG: --version="false"
Mar 18 10:10:36.012422 master-0 kubenswrapper[30420]: I0318 10:10:36.003358 30420 flags.go:64] FLAG: --vmodule=""
Mar 18 10:10:36.012422 master-0 kubenswrapper[30420]: I0318 10:10:36.003363 30420 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 18 10:10:36.012422 master-0 kubenswrapper[30420]: I0318 10:10:36.003368 30420 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 18 10:10:36.012422 master-0 kubenswrapper[30420]: W0318 10:10:36.003466 30420 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 10:10:36.012422 master-0 kubenswrapper[30420]: W0318 10:10:36.003472 30420 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 10:10:36.012422 master-0 kubenswrapper[30420]: W0318 10:10:36.003477 30420 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
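The `flags.go:64] FLAG:` entries above record every command-line flag the kubelet was started with, one entry per flag, in the form `FLAG: --name="value"`. A minimal sketch of collecting those pairs from a journal excerpt into a dict; `parse_flags` and the two-entry sample are illustrative helpers, not part of any kubelet tooling:

```python
import re

# Matches the kubelet startup entries of the form: flags.go:64] FLAG: --name="value".
# The non-greedy group captures only the text inside the logged quotes.
FLAG_RE = re.compile(r'flags\.go:\d+\] FLAG: (--[\w-]+)="(.*?)"')

def parse_flags(log_text: str) -> dict:
    """Collect all FLAG entries in a journal excerpt into {flag: value}."""
    return {m.group(1): m.group(2) for m in FLAG_RE.finditer(log_text)}

# Two sample entries copied from the log above.
sample = (
    'Mar 18 10:10:36.008566 master-0 kubenswrapper[30420]: I0318 10:10:36.003160 '
    '30420 flags.go:64] FLAG: --port="10250"\n'
    'Mar 18 10:10:36.011741 master-0 kubenswrapper[30420]: I0318 10:10:36.003221 '
    '30420 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"\n'
)
print(parse_flags(sample))  # {'--port': '10250', '--root-dir': '/var/lib/kubelet'}
```

Values are logged with surrounding double quotes even when empty, so an unset flag such as `--pod-cidr=""` comes back as an empty string rather than being absent.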
Mar 18 10:10:36.012422 master-0 kubenswrapper[30420]: W0318 10:10:36.003482 30420 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 10:10:36.012422 master-0 kubenswrapper[30420]: W0318 10:10:36.003487 30420 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 10:10:36.012422 master-0 kubenswrapper[30420]: W0318 10:10:36.003492 30420 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 10:10:36.012422 master-0 kubenswrapper[30420]: W0318 10:10:36.003497 30420 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 10:10:36.012422 master-0 kubenswrapper[30420]: W0318 10:10:36.003501 30420 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 10:10:36.013018 master-0 kubenswrapper[30420]: W0318 10:10:36.003505 30420 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 10:10:36.013018 master-0 kubenswrapper[30420]: W0318 10:10:36.003509 30420 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 10:10:36.013018 master-0 kubenswrapper[30420]: W0318 10:10:36.003513 30420 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 10:10:36.013018 master-0 kubenswrapper[30420]: W0318 10:10:36.003517 30420 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 10:10:36.013018 master-0 kubenswrapper[30420]: W0318 10:10:36.003521 30420 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 10:10:36.013018 master-0 kubenswrapper[30420]: W0318 10:10:36.003525 30420 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 10:10:36.013018 master-0 kubenswrapper[30420]: W0318 10:10:36.003529 30420 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 10:10:36.013018 master-0 kubenswrapper[30420]: W0318 10:10:36.003533 30420 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 10:10:36.013018 master-0 kubenswrapper[30420]: W0318 10:10:36.003537 30420 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 10:10:36.013018 master-0 kubenswrapper[30420]: W0318 10:10:36.003541 30420 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 10:10:36.013018 master-0 kubenswrapper[30420]: W0318 10:10:36.003545 30420 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 10:10:36.013018 master-0 kubenswrapper[30420]: W0318 10:10:36.003548 30420 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 10:10:36.013018 master-0 kubenswrapper[30420]: W0318 10:10:36.003552 30420 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 10:10:36.013018 master-0 kubenswrapper[30420]: W0318 10:10:36.003556 30420 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 10:10:36.013018 master-0 kubenswrapper[30420]: W0318 10:10:36.003560 30420 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 10:10:36.013018 master-0 kubenswrapper[30420]: W0318 10:10:36.003563 30420 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 10:10:36.013018 master-0 kubenswrapper[30420]: W0318 10:10:36.003567 30420 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 10:10:36.013018 master-0 kubenswrapper[30420]: W0318 10:10:36.003571 30420 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 10:10:36.013018 master-0 kubenswrapper[30420]: W0318 10:10:36.003574 30420 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 10:10:36.017203 master-0 kubenswrapper[30420]: W0318 10:10:36.003580 30420 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 10:10:36.017203 master-0 kubenswrapper[30420]: W0318 10:10:36.003584 30420 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 10:10:36.017203 master-0 kubenswrapper[30420]: W0318 10:10:36.003588 30420 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 10:10:36.017203 master-0 kubenswrapper[30420]: W0318 10:10:36.003592 30420 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 10:10:36.017203 master-0 kubenswrapper[30420]: W0318 10:10:36.003596 30420 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 10:10:36.017203 master-0 kubenswrapper[30420]: W0318 10:10:36.003600 30420 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 10:10:36.017203 master-0 kubenswrapper[30420]: W0318 10:10:36.003604 30420 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 10:10:36.017203 master-0 kubenswrapper[30420]: W0318 10:10:36.003607 30420 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 10:10:36.017203 master-0 kubenswrapper[30420]: W0318 10:10:36.003611 30420 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 10:10:36.017203 master-0 kubenswrapper[30420]: W0318 10:10:36.003615 30420 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 10:10:36.017203 master-0 kubenswrapper[30420]: W0318 10:10:36.003619 30420 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 10:10:36.017203 master-0 kubenswrapper[30420]: W0318 10:10:36.003622 30420 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 10:10:36.017203 master-0 kubenswrapper[30420]: W0318 10:10:36.003627 30420 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 10:10:36.017203 master-0 kubenswrapper[30420]: W0318 10:10:36.003630 30420 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 10:10:36.017203 master-0 kubenswrapper[30420]: W0318 10:10:36.003634 30420 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 10:10:36.017203 master-0 kubenswrapper[30420]: W0318 10:10:36.003639 30420 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 10:10:36.017203 master-0 kubenswrapper[30420]: W0318 10:10:36.003644 30420 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 10:10:36.017203 master-0 kubenswrapper[30420]: W0318 10:10:36.003648 30420 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 10:10:36.017203 master-0 kubenswrapper[30420]: W0318 10:10:36.003652 30420 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 10:10:36.017203 master-0 kubenswrapper[30420]: W0318 10:10:36.003656 30420 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 10:10:36.018008 master-0 kubenswrapper[30420]: W0318 10:10:36.003660 30420 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 10:10:36.018008 master-0 kubenswrapper[30420]: W0318 10:10:36.003663 30420 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 10:10:36.018008 master-0 kubenswrapper[30420]: W0318 10:10:36.003667 30420 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 10:10:36.018008 master-0 kubenswrapper[30420]: W0318 10:10:36.003671 30420 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 10:10:36.018008 master-0 kubenswrapper[30420]: W0318 10:10:36.003674 30420 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 10:10:36.018008 master-0 kubenswrapper[30420]: W0318 10:10:36.003678 30420 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 10:10:36.018008 master-0 kubenswrapper[30420]: W0318 10:10:36.003682 30420 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 10:10:36.018008 master-0 kubenswrapper[30420]: W0318 10:10:36.003686 30420 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 10:10:36.018008 master-0 kubenswrapper[30420]: W0318 10:10:36.003691 30420 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 10:10:36.018008 master-0 kubenswrapper[30420]: W0318 10:10:36.003695 30420 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 10:10:36.018008 master-0 kubenswrapper[30420]: W0318 10:10:36.003699 30420 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 10:10:36.018008 master-0 kubenswrapper[30420]: W0318 10:10:36.003703 30420 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 10:10:36.018008 master-0 kubenswrapper[30420]: W0318 10:10:36.003709 30420 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 10:10:36.018008 master-0 kubenswrapper[30420]: W0318 10:10:36.003713 30420 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 10:10:36.018008 master-0 kubenswrapper[30420]: W0318 10:10:36.003716 30420 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 10:10:36.018008 master-0 kubenswrapper[30420]: W0318 10:10:36.003720 30420 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 10:10:36.018008 master-0 kubenswrapper[30420]: W0318 10:10:36.003724 30420 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 10:10:36.018008 master-0 kubenswrapper[30420]: W0318 10:10:36.003727 30420 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 10:10:36.018008 master-0 kubenswrapper[30420]: W0318 10:10:36.003731 30420 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 10:10:36.018008 master-0 kubenswrapper[30420]: W0318 10:10:36.003734 30420 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 10:10:36.018743 master-0 kubenswrapper[30420]: W0318 10:10:36.003738 30420 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 10:10:36.018743 master-0 kubenswrapper[30420]: W0318 10:10:36.003742 30420 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 10:10:36.018743 master-0 kubenswrapper[30420]: W0318 10:10:36.003745 30420 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 10:10:36.018743 master-0 kubenswrapper[30420]: W0318 10:10:36.003749 30420 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 10:10:36.018743 master-0 kubenswrapper[30420]: W0318 10:10:36.003752 30420 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 10:10:36.018743 master-0 kubenswrapper[30420]: I0318 10:10:36.003758 30420 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 10:10:36.020272 master-0 kubenswrapper[30420]: I0318 10:10:36.020218 30420 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 18 10:10:36.020272 master-0 kubenswrapper[30420]: I0318 10:10:36.020262 30420 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 18 10:10:36.020450 master-0 kubenswrapper[30420]: W0318 10:10:36.020417 30420 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 10:10:36.020450 master-0 kubenswrapper[30420]: W0318 10:10:36.020438 30420 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 10:10:36.020450 master-0 kubenswrapper[30420]: W0318 10:10:36.020445 30420 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 10:10:36.020450 master-0 kubenswrapper[30420]: W0318 10:10:36.020452 30420 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 10:10:36.020619 master-0 kubenswrapper[30420]: W0318 10:10:36.020458 30420 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 10:10:36.020619 master-0 kubenswrapper[30420]: W0318 10:10:36.020465 30420 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 10:10:36.020619 master-0 kubenswrapper[30420]: W0318 10:10:36.020470 30420 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 10:10:36.020619 master-0 kubenswrapper[30420]: W0318 10:10:36.020478 30420 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 10:10:36.020619 master-0 kubenswrapper[30420]: W0318 10:10:36.020483 30420 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 10:10:36.020619 master-0 kubenswrapper[30420]: W0318 10:10:36.020488 30420 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 10:10:36.020619 master-0 kubenswrapper[30420]: W0318 10:10:36.020493 30420 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 10:10:36.020619 master-0 kubenswrapper[30420]: W0318 10:10:36.020498 30420 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 10:10:36.020619 master-0 kubenswrapper[30420]: W0318 10:10:36.020503 30420 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 10:10:36.020619 master-0 kubenswrapper[30420]: W0318 10:10:36.020509 30420 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
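The `feature_gate.go:386] feature gates:` summary above prints the resolved gate map using Go's `map[...]` formatting, as space-separated `Name:true/false` pairs. A hedged sketch of turning that one line into a Python dict; `parse_feature_gates` is a hypothetical helper, and the sample line below is shortened to four gates for illustration:

```python
import re

def parse_feature_gates(line: str) -> dict:
    """Extract {gate_name: bool} from a 'feature gates: {map[...]}' log line."""
    body = re.search(r'feature gates: \{map\[(.*?)\]\}', line)
    if not body:
        return {}
    return {
        name: value == "true"
        for name, value in (pair.split(":") for pair in body.group(1).split())
    }

# Shortened sample of the feature_gate.go:386 entry (most gates omitted).
line = ('I0318 10:10:36.003758 30420 feature_gate.go:386] feature gates: '
        '{map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false '
        'ValidatingAdmissionPolicy:true]}')
print(parse_feature_gates(line)["NodeSwap"])  # False
```

Note this summary covers only the gates the kubelet itself recognizes; the OpenShift-specific names flagged as `unrecognized feature gate` never appear in it.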
Mar 18 10:10:36.020619 master-0 kubenswrapper[30420]: W0318 10:10:36.020516 30420 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 10:10:36.020619 master-0 kubenswrapper[30420]: W0318 10:10:36.020523 30420 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 10:10:36.020619 master-0 kubenswrapper[30420]: W0318 10:10:36.020528 30420 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 10:10:36.020619 master-0 kubenswrapper[30420]: W0318 10:10:36.020533 30420 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 10:10:36.020619 master-0 kubenswrapper[30420]: W0318 10:10:36.020538 30420 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 10:10:36.020619 master-0 kubenswrapper[30420]: W0318 10:10:36.020547 30420 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 10:10:36.020619 master-0 kubenswrapper[30420]: W0318 10:10:36.020553 30420 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 10:10:36.020619 master-0 kubenswrapper[30420]: W0318 10:10:36.020557 30420 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 10:10:36.020619 master-0 kubenswrapper[30420]: W0318 10:10:36.020562 30420 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 10:10:36.020619 master-0 kubenswrapper[30420]: W0318 10:10:36.020567 30420 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 10:10:36.021383 master-0 kubenswrapper[30420]: W0318 10:10:36.020571 30420 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 10:10:36.021383 master-0 kubenswrapper[30420]: W0318 10:10:36.020576 30420 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 10:10:36.021383 master-0 kubenswrapper[30420]: W0318 10:10:36.020581 30420 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 10:10:36.021383 master-0 kubenswrapper[30420]: W0318 10:10:36.020586 30420 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 10:10:36.021383 master-0 kubenswrapper[30420]: W0318 10:10:36.020593 30420 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 10:10:36.021383 master-0 kubenswrapper[30420]: W0318 10:10:36.020604 30420 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 10:10:36.021383 master-0 kubenswrapper[30420]: W0318 10:10:36.020610 30420 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 10:10:36.021383 master-0 kubenswrapper[30420]: W0318 10:10:36.020617 30420 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 10:10:36.021383 master-0 kubenswrapper[30420]: W0318 10:10:36.020627 30420 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 10:10:36.021383 master-0 kubenswrapper[30420]: W0318 10:10:36.020633 30420 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 10:10:36.021383 master-0 kubenswrapper[30420]: W0318 10:10:36.020640 30420 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 10:10:36.021383 master-0 kubenswrapper[30420]: W0318 10:10:36.020646 30420 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 10:10:36.021383 master-0 kubenswrapper[30420]: W0318 10:10:36.020651 30420 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 10:10:36.021383 master-0 kubenswrapper[30420]: W0318 10:10:36.020657 30420 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 10:10:36.021383 master-0 kubenswrapper[30420]: W0318 10:10:36.020662 30420 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 10:10:36.021383 master-0 kubenswrapper[30420]: W0318 10:10:36.020667 30420 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 10:10:36.021383 master-0 kubenswrapper[30420]: W0318 10:10:36.020672 30420 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 10:10:36.021383 master-0 kubenswrapper[30420]: W0318 10:10:36.020678 30420 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 10:10:36.021383 master-0 kubenswrapper[30420]: W0318 10:10:36.020685 30420 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 10:10:36.022127 master-0 kubenswrapper[30420]: W0318 10:10:36.020690 30420 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 10:10:36.022127 master-0 kubenswrapper[30420]: W0318 10:10:36.020700 30420 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 10:10:36.022127 master-0 kubenswrapper[30420]: W0318 10:10:36.020706 30420 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 10:10:36.022127 master-0 kubenswrapper[30420]: W0318 10:10:36.020711 30420 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 10:10:36.022127 master-0 kubenswrapper[30420]: W0318 10:10:36.020716 30420 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 10:10:36.022127 master-0 kubenswrapper[30420]: W0318 10:10:36.020722 30420 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 10:10:36.022127 master-0 kubenswrapper[30420]: W0318 10:10:36.020727 30420 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 10:10:36.022127 master-0 kubenswrapper[30420]: W0318 10:10:36.020732 30420 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 10:10:36.022127 master-0 kubenswrapper[30420]: W0318 10:10:36.020737 30420 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 10:10:36.022127 master-0 kubenswrapper[30420]: W0318 10:10:36.020742 30420 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 10:10:36.022127 master-0 kubenswrapper[30420]: W0318 10:10:36.020747 30420 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 10:10:36.022127 master-0 kubenswrapper[30420]: W0318 10:10:36.020752 30420 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 10:10:36.022127 master-0 kubenswrapper[30420]: W0318 10:10:36.020757 30420 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 10:10:36.022127 master-0 kubenswrapper[30420]: W0318 10:10:36.020765 30420 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 10:10:36.022127 master-0 kubenswrapper[30420]: W0318 10:10:36.020774 30420 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 10:10:36.022127 master-0 kubenswrapper[30420]: W0318 10:10:36.020780 30420 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 10:10:36.022127 master-0 kubenswrapper[30420]: W0318 10:10:36.020784 30420 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 10:10:36.022127 master-0 kubenswrapper[30420]: W0318 10:10:36.020789 30420 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 10:10:36.022127 master-0 kubenswrapper[30420]: W0318 10:10:36.020794 30420 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 10:10:36.022127 master-0 kubenswrapper[30420]: W0318 10:10:36.020799 30420 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 10:10:36.023034 master-0 kubenswrapper[30420]: W0318 10:10:36.020803 30420 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 10:10:36.023034 master-0 kubenswrapper[30420]: W0318 10:10:36.020808 30420 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 10:10:36.023034 master-0 kubenswrapper[30420]: W0318 10:10:36.020814 30420 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 10:10:36.023034 master-0 kubenswrapper[30420]: W0318 10:10:36.020819 30420 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 10:10:36.023034 master-0 kubenswrapper[30420]: W0318 10:10:36.020923 30420 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 10:10:36.023034 master-0 kubenswrapper[30420]: W0318 10:10:36.020932 30420 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 10:10:36.023034 master-0 kubenswrapper[30420]: W0318 10:10:36.021142 30420 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 10:10:36.023034 master-0 kubenswrapper[30420]: W0318 10:10:36.021148 30420 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 10:10:36.023034 master-0 kubenswrapper[30420]: W0318 10:10:36.021153 30420 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 10:10:36.023034 master-0 kubenswrapper[30420]: I0318 10:10:36.021163 30420 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 10:10:36.023034 master-0 kubenswrapper[30420]: W0318 10:10:36.021316 30420 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 10:10:36.023034 master-0 kubenswrapper[30420]: W0318 10:10:36.021328 30420 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 10:10:36.023034 master-0 kubenswrapper[30420]: W0318 10:10:36.021336 30420 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 10:10:36.023034 master-0 kubenswrapper[30420]: W0318 10:10:36.021343 30420 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 10:10:36.023034 master-0 kubenswrapper[30420]: W0318 10:10:36.021348 30420 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 10:10:36.023576 master-0 kubenswrapper[30420]: W0318 10:10:36.021353 30420 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 10:10:36.023576 master-0 kubenswrapper[30420]: W0318 10:10:36.021358 30420 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 10:10:36.023576 master-0 kubenswrapper[30420]: W0318 10:10:36.021363 30420 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 10:10:36.023576 master-0 kubenswrapper[30420]: W0318 10:10:36.021368 30420 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 10:10:36.023576 master-0 kubenswrapper[30420]: W0318 10:10:36.021373 30420 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 10:10:36.023576 master-0 kubenswrapper[30420]: W0318 10:10:36.021384 30420 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 10:10:36.023576 master-0 kubenswrapper[30420]: W0318 10:10:36.021390 30420 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 10:10:36.023576 master-0 kubenswrapper[30420]: W0318 10:10:36.021398 30420 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 10:10:36.023576 master-0 kubenswrapper[30420]: W0318 10:10:36.021404 30420 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 10:10:36.023576 master-0 kubenswrapper[30420]: W0318 10:10:36.021409 30420 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 10:10:36.023576 master-0 kubenswrapper[30420]: W0318 10:10:36.021416 30420 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 10:10:36.023576 master-0 kubenswrapper[30420]: W0318 10:10:36.021421 30420 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 10:10:36.023576 master-0 kubenswrapper[30420]: W0318 10:10:36.021426 30420 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 10:10:36.023576 master-0 kubenswrapper[30420]: W0318 10:10:36.021431 30420 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 10:10:36.023576 master-0 kubenswrapper[30420]: W0318 10:10:36.021435 30420 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 10:10:36.023576 master-0 kubenswrapper[30420]: W0318 10:10:36.021440 30420 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 10:10:36.023576 master-0 kubenswrapper[30420]: W0318 10:10:36.021445 30420 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 10:10:36.023576 master-0 kubenswrapper[30420]: W0318 10:10:36.021450 30420 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 10:10:36.023576 master-0 kubenswrapper[30420]: W0318 10:10:36.021456 30420 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 10:10:36.023576 master-0 kubenswrapper[30420]: W0318 10:10:36.021461 30420 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 10:10:36.024363 master-0 kubenswrapper[30420]: W0318 10:10:36.021466 30420 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 10:10:36.024363 master-0 kubenswrapper[30420]: W0318 10:10:36.021470 30420 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 10:10:36.024363 master-0 kubenswrapper[30420]: W0318 10:10:36.021475 30420 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 10:10:36.024363 master-0 kubenswrapper[30420]: W0318 10:10:36.021479 30420 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 10:10:36.024363 master-0 kubenswrapper[30420]: W0318 10:10:36.021485 30420 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 10:10:36.024363 master-0 kubenswrapper[30420]: W0318 10:10:36.021490 30420 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 10:10:36.024363 master-0 kubenswrapper[30420]: W0318 10:10:36.021495 30420 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 10:10:36.024363 master-0 kubenswrapper[30420]: W0318 10:10:36.021499 30420 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 10:10:36.024363 master-0 kubenswrapper[30420]: W0318 10:10:36.021504 30420 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 10:10:36.024363 master-0 kubenswrapper[30420]: W0318 10:10:36.021509 30420 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 10:10:36.024363 master-0 kubenswrapper[30420]: W0318 10:10:36.021514 30420 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 10:10:36.024363 master-0 kubenswrapper[30420]: W0318 10:10:36.021518 30420 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 10:10:36.024363 master-0 kubenswrapper[30420]: W0318 10:10:36.021523 30420 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 10:10:36.024363 master-0 kubenswrapper[30420]: W0318 10:10:36.021527 30420 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 10:10:36.024363 master-0 kubenswrapper[30420]: W0318 10:10:36.021532 30420 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 10:10:36.024363 master-0 kubenswrapper[30420]: W0318 10:10:36.021537 30420 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 10:10:36.024363 master-0 kubenswrapper[30420]: W0318 10:10:36.021541 30420 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 10:10:36.024363 master-0 kubenswrapper[30420]: W0318 10:10:36.021549 30420 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 10:10:36.024363 master-0 kubenswrapper[30420]: W0318 10:10:36.021554 30420 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 10:10:36.024363 master-0 kubenswrapper[30420]: W0318 10:10:36.021558 30420 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 10:10:36.025421 master-0 kubenswrapper[30420]: W0318 10:10:36.021565 30420 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 10:10:36.025421 master-0 kubenswrapper[30420]: W0318 10:10:36.021571 30420 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 10:10:36.025421 master-0 kubenswrapper[30420]: W0318 10:10:36.021577 30420 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 18 10:10:36.025421 master-0 kubenswrapper[30420]: W0318 10:10:36.021583 30420 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 18 10:10:36.025421 master-0 kubenswrapper[30420]: W0318 10:10:36.021588 30420 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 18 10:10:36.025421 master-0 kubenswrapper[30420]: W0318 10:10:36.021593 30420 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 18 10:10:36.025421 master-0 kubenswrapper[30420]: W0318 10:10:36.021597 30420 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 18 10:10:36.025421 master-0 kubenswrapper[30420]: W0318 10:10:36.021602 30420 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 18 10:10:36.025421 master-0 kubenswrapper[30420]: W0318 10:10:36.021608 30420 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 18 10:10:36.025421 master-0 kubenswrapper[30420]: W0318 10:10:36.021614 30420 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 18 10:10:36.025421 master-0 kubenswrapper[30420]: W0318 10:10:36.021620 30420 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 18 10:10:36.025421 master-0 kubenswrapper[30420]: W0318 10:10:36.021625 30420 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 18 10:10:36.025421 master-0 kubenswrapper[30420]: W0318 10:10:36.021629 30420 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 18 10:10:36.025421 master-0 kubenswrapper[30420]: W0318 10:10:36.021635 30420 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 18 10:10:36.025421 master-0 kubenswrapper[30420]: W0318 10:10:36.021640 30420 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 18 10:10:36.025421 master-0 kubenswrapper[30420]: W0318 10:10:36.021645 30420 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 18 10:10:36.025421 master-0 kubenswrapper[30420]: W0318 10:10:36.021650 30420 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 18 10:10:36.025421 master-0 kubenswrapper[30420]: W0318 10:10:36.021655 30420 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 18 10:10:36.025421 master-0 kubenswrapper[30420]: W0318 10:10:36.021660 30420 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 18 10:10:36.026293 master-0 kubenswrapper[30420]: W0318 10:10:36.021665 30420 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 18 10:10:36.026293 master-0 kubenswrapper[30420]: W0318 10:10:36.021670 30420 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 18 10:10:36.026293 master-0 kubenswrapper[30420]: W0318 10:10:36.021674 30420 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 18 10:10:36.026293 master-0 kubenswrapper[30420]: W0318 10:10:36.021679 30420 feature_gate.go:330] unrecognized feature gate: 
AzureWorkloadIdentity Mar 18 10:10:36.026293 master-0 kubenswrapper[30420]: W0318 10:10:36.021684 30420 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 18 10:10:36.026293 master-0 kubenswrapper[30420]: W0318 10:10:36.021690 30420 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 18 10:10:36.026293 master-0 kubenswrapper[30420]: W0318 10:10:36.021695 30420 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 18 10:10:36.026293 master-0 kubenswrapper[30420]: W0318 10:10:36.021702 30420 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 18 10:10:36.026293 master-0 kubenswrapper[30420]: I0318 10:10:36.021710 30420 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 18 10:10:36.026293 master-0 kubenswrapper[30420]: I0318 10:10:36.021928 30420 server.go:940] "Client rotation is on, will bootstrap in background" Mar 18 10:10:36.026293 master-0 kubenswrapper[30420]: I0318 10:10:36.023575 30420 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Mar 18 10:10:36.026293 master-0 kubenswrapper[30420]: I0318 10:10:36.023650 30420 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Mar 18 10:10:36.026293 master-0 kubenswrapper[30420]: I0318 10:10:36.023882 30420 server.go:997] "Starting client certificate rotation" Mar 18 10:10:36.026293 master-0 kubenswrapper[30420]: I0318 10:10:36.023893 30420 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Mar 18 10:10:36.027337 master-0 kubenswrapper[30420]: I0318 10:10:36.027295 30420 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 18 10:10:36.027859 master-0 kubenswrapper[30420]: I0318 10:10:36.024098 30420 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-19 09:43:17 +0000 UTC, rotation deadline is 2026-03-19 05:17:47.523061758 +0000 UTC Mar 18 10:10:36.027859 master-0 kubenswrapper[30420]: I0318 10:10:36.027848 30420 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 19h7m11.495220453s for next certificate rotation Mar 18 10:10:36.028923 master-0 kubenswrapper[30420]: I0318 10:10:36.028782 30420 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 18 10:10:36.030920 master-0 kubenswrapper[30420]: I0318 10:10:36.030798 30420 log.go:25] "Validated CRI v1 runtime API" Mar 18 10:10:36.034708 master-0 kubenswrapper[30420]: I0318 10:10:36.034676 30420 log.go:25] "Validated CRI v1 image API" Mar 18 10:10:36.037904 master-0 kubenswrapper[30420]: I0318 10:10:36.037072 30420 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 18 10:10:36.048362 master-0 kubenswrapper[30420]: I0318 10:10:36.048286 30420 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 b6f69005-7b27-4e50-b235-73833be75bbb:/dev/vda3] Mar 18 10:10:36.051577 master-0 kubenswrapper[30420]: I0318 10:10:36.048337 30420 fs.go:136] Filesystem partitions: 
map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/00431ec658bea7a97a4c1df198c67f87ad4685fb77cc89ae90150ff213743316/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/00431ec658bea7a97a4c1df198c67f87ad4685fb77cc89ae90150ff213743316/userdata/shm major:0 minor:578 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/009c5d81632f0f08c8ed08e157decfc8eae7a1397849b1d1bb183a9c0b36e696/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/009c5d81632f0f08c8ed08e157decfc8eae7a1397849b1d1bb183a9c0b36e696/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/02d02240944e9230fa342b4b1030eceabc9b6ad789e1383eef1d657905cf15af/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/02d02240944e9230fa342b4b1030eceabc9b6ad789e1383eef1d657905cf15af/userdata/shm major:0 minor:253 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/03355a5e2caa4496c4b10efd4243dd60c302d54b340a80972ebe3e5661f0dd6b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/03355a5e2caa4496c4b10efd4243dd60c302d54b340a80972ebe3e5661f0dd6b/userdata/shm major:0 minor:349 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/03c65d78c2c86aff78c560583deceefc749227ea76cab522d93c1dd2064cc015/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/03c65d78c2c86aff78c560583deceefc749227ea76cab522d93c1dd2064cc015/userdata/shm major:0 minor:520 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/04e5c67b5ae79340d56b2dfa469c98052472e299f29df84c3890635b1574d4c0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/04e5c67b5ae79340d56b2dfa469c98052472e299f29df84c3890635b1574d4c0/userdata/shm major:0 minor:263 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/06e9465576405f83c2377274bbe7c9f80c7e1d2afadf9ee173551a2f7f95d786/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/06e9465576405f83c2377274bbe7c9f80c7e1d2afadf9ee173551a2f7f95d786/userdata/shm major:0 minor:79 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0a14d09c0c63bc07a9e3f986358b6bbfe11d33fdfadd6b5aba6cb62ef0a527b0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0a14d09c0c63bc07a9e3f986358b6bbfe11d33fdfadd6b5aba6cb62ef0a527b0/userdata/shm major:0 minor:513 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0a709f6a031857e3e4e56dda2c8a6cf2ebbad7bd036491c8c8d4d7ae887efd7b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0a709f6a031857e3e4e56dda2c8a6cf2ebbad7bd036491c8c8d4d7ae887efd7b/userdata/shm major:0 minor:1102 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0ab9786ebf50a65e9432d654c3f52392db8e881a65fb26e7e3e002f1d0577eeb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0ab9786ebf50a65e9432d654c3f52392db8e881a65fb26e7e3e002f1d0577eeb/userdata/shm major:0 minor:354 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0c606c4f78cc5d83eabd2765b617ca07a15da7eb4ca4b85bfad4f7028933f81f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0c606c4f78cc5d83eabd2765b617ca07a15da7eb4ca4b85bfad4f7028933f81f/userdata/shm major:0 minor:119 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/0d84a97391b20bbc1473efdc91b70735c4232a35d2754651bb0243ebf80ab3be/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0d84a97391b20bbc1473efdc91b70735c4232a35d2754651bb0243ebf80ab3be/userdata/shm major:0 minor:270 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/13ead1a9d130e4cdb9a3e1038d5bbe3813860bfedd951bc71fd7108de36c6c88/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/13ead1a9d130e4cdb9a3e1038d5bbe3813860bfedd951bc71fd7108de36c6c88/userdata/shm major:0 minor:444 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/15afbeaf2b91c3dde6de78ecc76cf185217127e7fd54f971970a9dc91ec72267/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/15afbeaf2b91c3dde6de78ecc76cf185217127e7fd54f971970a9dc91ec72267/userdata/shm major:0 minor:1080 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/187a5eb02f6d39f4d5d17d569f5578af7e87c01c9503e828b0f618e0f62581eb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/187a5eb02f6d39f4d5d17d569f5578af7e87c01c9503e828b0f618e0f62581eb/userdata/shm major:0 minor:320 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1a4ce30442f41beafbbdf0d0fcad6e463a305b377720e6060de4d2e923ec7031/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1a4ce30442f41beafbbdf0d0fcad6e463a305b377720e6060de4d2e923ec7031/userdata/shm major:0 minor:1030 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1b4d46c0a582fa8416fadc519a245d9a05f81263579189dfddab63cae5612499/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1b4d46c0a582fa8416fadc519a245d9a05f81263579189dfddab63cae5612499/userdata/shm major:0 minor:514 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/1d7c06dbc8e2f887f2a21bc3e179a21693ddc1835812120917fd3ac94d4f0ff2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1d7c06dbc8e2f887f2a21bc3e179a21693ddc1835812120917fd3ac94d4f0ff2/userdata/shm major:0 minor:589 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1ddd0ca0bee2bbed601ee28c1df5999ea68981b20d1c0067b52437a2649e11aa/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1ddd0ca0bee2bbed601ee28c1df5999ea68981b20d1c0067b52437a2649e11aa/userdata/shm major:0 minor:454 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2108f9b19bef72325cf7ce6838f94c4d93335d1acb2849349c2da5bf81571c7d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2108f9b19bef72325cf7ce6838f94c4d93335d1acb2849349c2da5bf81571c7d/userdata/shm major:0 minor:588 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/22baed4d026a2e73b0585b205810980acb867f841d04f3d6a690f1122607e415/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/22baed4d026a2e73b0585b205810980acb867f841d04f3d6a690f1122607e415/userdata/shm major:0 minor:255 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/274e9b834559b126c9207a26c34fb18f9b1812e69065a033951f8808dc379847/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/274e9b834559b126c9207a26c34fb18f9b1812e69065a033951f8808dc379847/userdata/shm major:0 minor:437 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/296c63b9a082d2c4952a03261f6f9afd9282d74bb23ca7de387e35c413bd5177/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/296c63b9a082d2c4952a03261f6f9afd9282d74bb23ca7de387e35c413bd5177/userdata/shm major:0 minor:657 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/2b738d6ab8a2079028f3f1e5804df92e50d8884090bb1653ec14e4d63a6afccd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2b738d6ab8a2079028f3f1e5804df92e50d8884090bb1653ec14e4d63a6afccd/userdata/shm major:0 minor:361 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2cf1bdb8eb09b95692725959e60306272582dc358e1d2a541fe6b5b5e57971c0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2cf1bdb8eb09b95692725959e60306272582dc358e1d2a541fe6b5b5e57971c0/userdata/shm major:0 minor:275 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2cf9d5a318f253e886267d57345deb8cc4469309552817e3d629697b159e40e7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2cf9d5a318f253e886267d57345deb8cc4469309552817e3d629697b159e40e7/userdata/shm major:0 minor:600 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2e6eabf2087e36d3613240f79a61ceca615c772d05baa285322d88bd80a44773/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2e6eabf2087e36d3613240f79a61ceca615c772d05baa285322d88bd80a44773/userdata/shm major:0 minor:1028 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/308f045ad48f29df3fbed5a202a7ccbbb9fcab711591e6a10e9dfffd40505d42/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/308f045ad48f29df3fbed5a202a7ccbbb9fcab711591e6a10e9dfffd40505d42/userdata/shm major:0 minor:802 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3acdf5b69c1ce66294030ac402e9c8e09366d47522c5ff94a22e2363f49e4024/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3acdf5b69c1ce66294030ac402e9c8e09366d47522c5ff94a22e2363f49e4024/userdata/shm major:0 minor:782 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/3fdec4aed0d4d1e92fcea54e18530bddc4ceb0a577b38a5b2728e046e7e0d8a1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3fdec4aed0d4d1e92fcea54e18530bddc4ceb0a577b38a5b2728e046e7e0d8a1/userdata/shm major:0 minor:249 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/481a20c56b1513a6550470d25ece05987dc0ad3be0f23f19f26b6d5a7a36ce42/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/481a20c56b1513a6550470d25ece05987dc0ad3be0f23f19f26b6d5a7a36ce42/userdata/shm major:0 minor:830 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/543fb2147aca575376ed7bd211cfca3f8a0e31f62df5e58bf47f4f7fc11fc303/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/543fb2147aca575376ed7bd211cfca3f8a0e31f62df5e58bf47f4f7fc11fc303/userdata/shm major:0 minor:268 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5f264524ff7942903d23e39e84e002c2a4f349e860595476e5954b840e22c114/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5f264524ff7942903d23e39e84e002c2a4f349e860595476e5954b840e22c114/userdata/shm major:0 minor:1124 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5f76a44720ca3021debcea94492e12e9442fdb9f8fbe338ee1965217a14109bd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5f76a44720ca3021debcea94492e12e9442fdb9f8fbe338ee1965217a14109bd/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/613533c3a19224e9e30dba35639ecd39810b8db2f7864917803baa176a7bbed0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/613533c3a19224e9e30dba35639ecd39810b8db2f7864917803baa176a7bbed0/userdata/shm major:0 minor:265 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/61f6b81b92e4d6e8441e143173fb9e75d890f0b6176d5db04fc0f47c9e7e489a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/61f6b81b92e4d6e8441e143173fb9e75d890f0b6176d5db04fc0f47c9e7e489a/userdata/shm major:0 minor:591 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6a356215383e4477cfa420d0c3e3c8a05dd9f0afe4bf19cf96af5611814ab90a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6a356215383e4477cfa420d0c3e3c8a05dd9f0afe4bf19cf96af5611814ab90a/userdata/shm major:0 minor:99 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6b37b06bafa3fe7617d0c4d370f2bc9e1e4e31111091703de1b10d8a3711bfba/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6b37b06bafa3fe7617d0c4d370f2bc9e1e4e31111091703de1b10d8a3711bfba/userdata/shm major:0 minor:605 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7483df25713a00b0ea8cbc4c6314a73f83bff54b160af6b49103c48fec6f8b1e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7483df25713a00b0ea8cbc4c6314a73f83bff54b160af6b49103c48fec6f8b1e/userdata/shm major:0 minor:864 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7f312c72332d1eca8944cf91ca9c1d896c13f62ea944da320c89182c0dd4ab06/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7f312c72332d1eca8944cf91ca9c1d896c13f62ea944da320c89182c0dd4ab06/userdata/shm major:0 minor:396 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/84a3629f241ccd15c8649ba629b3be31e2785a3b2224bbe09e95e6dbad4b5613/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/84a3629f241ccd15c8649ba629b3be31e2785a3b2224bbe09e95e6dbad4b5613/userdata/shm major:0 minor:267 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/860dad91b3226c9023c3b60395b0ad953648fc93c4b425a376a5054813858ced/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/860dad91b3226c9023c3b60395b0ad953648fc93c4b425a376a5054813858ced/userdata/shm major:0 minor:593 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8e83e941e1bb6d2e2e4ed50989f8c4a7c436dc56c6018257d976ac9218210eba/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8e83e941e1bb6d2e2e4ed50989f8c4a7c436dc56c6018257d976ac9218210eba/userdata/shm major:0 minor:1003 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8f11956d88039b0b64ae7a326d73a1a29f38de2a62777ca3d744161f04878819/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8f11956d88039b0b64ae7a326d73a1a29f38de2a62777ca3d744161f04878819/userdata/shm major:0 minor:257 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/94d378b5868ac49c0d516b9285e21a09fb0d6dca212ba5b79072685e6b662578/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/94d378b5868ac49c0d516b9285e21a09fb0d6dca212ba5b79072685e6b662578/userdata/shm major:0 minor:355 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/983b16a4206de1f333a12de0d35d4c31d9a34f31d59c85ba786062d1421c921f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/983b16a4206de1f333a12de0d35d4c31d9a34f31d59c85ba786062d1421c921f/userdata/shm major:0 minor:308 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/99f1238675e89d202ac72814030597ebf2c78d75d8dce9d24566f86cd13b327c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/99f1238675e89d202ac72814030597ebf2c78d75d8dce9d24566f86cd13b327c/userdata/shm major:0 minor:888 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/9a9d18e78a09ff29603fbd5fc9e03f2d3a2eb3c0cb4954994f17a7962e1ccc72/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9a9d18e78a09ff29603fbd5fc9e03f2d3a2eb3c0cb4954994f17a7962e1ccc72/userdata/shm major:0 minor:596 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9c1ce07b6c7993e6988dcb73b0d0ae149fc17c7c6fa96dc548353a31db24514c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9c1ce07b6c7993e6988dcb73b0d0ae149fc17c7c6fa96dc548353a31db24514c/userdata/shm major:0 minor:438 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9dc4baf2ee903f66ceacf214f401bab7bc4c01b6dec665d83f3584b31ae00f41/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9dc4baf2ee903f66ceacf214f401bab7bc4c01b6dec665d83f3584b31ae00f41/userdata/shm major:0 minor:64 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9ecbe775d85b5008c6adeeb8170b86d61ae88bf900fcd70723b66300a47bcaec/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9ecbe775d85b5008c6adeeb8170b86d61ae88bf900fcd70723b66300a47bcaec/userdata/shm major:0 minor:817 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9fee5c93850116cedccb29b440cbb9d64b2e4cc6c4a2b7baa36f936fc07adce9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9fee5c93850116cedccb29b440cbb9d64b2e4cc6c4a2b7baa36f936fc07adce9/userdata/shm major:0 minor:364 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a0f6a23031d96231e99cbb9f2b16dea4d913c0ee0df84104c4f8c08579a04daa/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a0f6a23031d96231e99cbb9f2b16dea4d913c0ee0df84104c4f8c08579a04daa/userdata/shm major:0 minor:273 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/a4b6c9bb5e1aa6ddb46f2ece42f31a363d888ffb22d8e2d50941005d7a91173e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a4b6c9bb5e1aa6ddb46f2ece42f31a363d888ffb22d8e2d50941005d7a91173e/userdata/shm major:0 minor:806 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a62338b3d8b6fefea0ba1a5636a4c5079225838e71c631e7514905926d40be01/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a62338b3d8b6fefea0ba1a5636a4c5079225838e71c631e7514905926d40be01/userdata/shm major:0 minor:820 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a70d40880058e84142e4d02963e7aba37e4a753a42ab982dbb781aba6c1199ec/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a70d40880058e84142e4d02963e7aba37e4a753a42ab982dbb781aba6c1199ec/userdata/shm major:0 minor:825 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a8685da7c022ead7819bc14f1d28e93a2c0d8bd27bb5dc325c78a31a740e3f59/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a8685da7c022ead7819bc14f1d28e93a2c0d8bd27bb5dc325c78a31a740e3f59/userdata/shm major:0 minor:1209 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ac57b9f21c66b05de1907050080a6922bfb455574d5cf2698b6bd4c95c6df165/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ac57b9f21c66b05de1907050080a6922bfb455574d5cf2698b6bd4c95c6df165/userdata/shm major:0 minor:1026 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/adda5560398a1e9cd1248ce8d3ae8608ee224ce0ee349c65f7682b313879aa78/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/adda5560398a1e9cd1248ce8d3ae8608ee224ce0ee349c65f7682b313879aa78/userdata/shm major:0 minor:1162 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/b58497ff3c8993b13d6f045f9b3aa17b9b5e464305fd642acb69bc40d01db14a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b58497ff3c8993b13d6f045f9b3aa17b9b5e464305fd642acb69bc40d01db14a/userdata/shm major:0 minor:148 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b588169f9714563a6db5379251857ae747425b95554009dbd48c296b2e82b297/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b588169f9714563a6db5379251857ae747425b95554009dbd48c296b2e82b297/userdata/shm major:0 minor:360 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b7ca349d109c7ce47be51e023fb21ab1709798444b4c309eab6316772a1ee596/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b7ca349d109c7ce47be51e023fb21ab1709798444b4c309eab6316772a1ee596/userdata/shm major:0 minor:1221 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c669ea9b66a51273cf2d30ced0d0c7e6bfc9166bf41cddcbf86ac434cad57ea6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c669ea9b66a51273cf2d30ced0d0c7e6bfc9166bf41cddcbf86ac434cad57ea6/userdata/shm major:0 minor:379 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cc6e82f62809390e77afef9a24511f8204b584c9c34f5174bf13a9f3c743fa58/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cc6e82f62809390e77afef9a24511f8204b584c9c34f5174bf13a9f3c743fa58/userdata/shm major:0 minor:511 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cc949f0d8f85c68fa457f1194d4c5e8aa9bf8a96548dfb4976d04f8be5a7a9b6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cc949f0d8f85c68fa457f1194d4c5e8aa9bf8a96548dfb4976d04f8be5a7a9b6/userdata/shm major:0 minor:352 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/cfbca75437300b7b872afd6b8a0f67b07ac16a2585e869d44683d0377dfcaeaa/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cfbca75437300b7b872afd6b8a0f67b07ac16a2585e869d44683d0377dfcaeaa/userdata/shm major:0 minor:137 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d7d862ef1259d0f32a24b080a794c178935b4f82b34bd652442b355adbe27b4c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d7d862ef1259d0f32a24b080a794c178935b4f82b34bd652442b355adbe27b4c/userdata/shm major:0 minor:841 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d9a9cd3f2878ec84a255f5f74dc3526f3a1623550d44547c9ce47a07a51bb959/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d9a9cd3f2878ec84a255f5f74dc3526f3a1623550d44547c9ce47a07a51bb959/userdata/shm major:0 minor:114 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/da42cce599588e6c99d4cd2839a25bf8a6c6ba9dc794e5b75cfaceda627f492b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/da42cce599588e6c99d4cd2839a25bf8a6c6ba9dc794e5b75cfaceda627f492b/userdata/shm major:0 minor:822 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dc23eb8c4f8df6172dfca6b7df2e710cff8ef0d5f4a2b6bc29af4b8dd83114fe/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dc23eb8c4f8df6172dfca6b7df2e710cff8ef0d5f4a2b6bc29af4b8dd83114fe/userdata/shm major:0 minor:832 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dd0e307b59dcdef36339f9469bcea9ae60dc835b43a1e8b7190883e66520e662/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dd0e307b59dcdef36339f9469bcea9ae60dc835b43a1e8b7190883e66520e662/userdata/shm major:0 minor:384 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/dda9475997ae063330eb66def313ccd5f6f56fc68307fe940171e35bbbb378fc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dda9475997ae063330eb66def313ccd5f6f56fc68307fe940171e35bbbb378fc/userdata/shm major:0 minor:1068 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dfd0e7e42052e04911701599adae500aa7e091be93bca4bd99512045dd966402/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dfd0e7e42052e04911701599adae500aa7e091be93bca4bd99512045dd966402/userdata/shm major:0 minor:259 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e277fb0b84dd045eb44f5a8337ca7f75f6577ad5f14ee5eacb1c176f0cf83dfa/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e277fb0b84dd045eb44f5a8337ca7f75f6577ad5f14ee5eacb1c176f0cf83dfa/userdata/shm major:0 minor:913 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e702a6208830f572cc3b5f2ed7735679946a02e12d549d40a5020b7820cc5f46/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e702a6208830f572cc3b5f2ed7735679946a02e12d549d40a5020b7820cc5f46/userdata/shm major:0 minor:831 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ee46779ae89b4ca2573c0db3f08f40bcd1f36bd939f6b097aaa8ab0676c68690/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ee46779ae89b4ca2573c0db3f08f40bcd1f36bd939f6b097aaa8ab0676c68690/userdata/shm major:0 minor:250 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fc70fe385192b60cb00cc2ccd1eb9ea175a5eff153501a735cc786b1100d45a8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fc70fe385192b60cb00cc2ccd1eb9ea175a5eff153501a735cc786b1100d45a8/userdata/shm major:0 minor:1117 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/fd600b9af2d2390bce62bac606740fc4a23373db916a45bc5361be1ed164fee1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fd600b9af2d2390bce62bac606740fc4a23373db916a45bc5361be1ed164fee1/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fe35b5f7a2da5ebf4bbbee570d091e9d7b1840cb3252d65d0a8b082be7bbb647/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fe35b5f7a2da5ebf4bbbee570d091e9d7b1840cb3252d65d0a8b082be7bbb647/userdata/shm major:0 minor:1034 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/03de1ea6-da57-4e13-8e5a-d5e10a9f9957/volumes/kubernetes.io~projected/kube-api-access-hcj8f:{mountpoint:/var/lib/kubelet/pods/03de1ea6-da57-4e13-8e5a-d5e10a9f9957/volumes/kubernetes.io~projected/kube-api-access-hcj8f major:0 minor:105 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0442ec6c-5973-40a5-a0c3-dc02de46d343/volumes/kubernetes.io~projected/kube-api-access-5x6ht:{mountpoint:/var/lib/kubelet/pods/0442ec6c-5973-40a5-a0c3-dc02de46d343/volumes/kubernetes.io~projected/kube-api-access-5x6ht major:0 minor:123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0442ec6c-5973-40a5-a0c3-dc02de46d343/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/0442ec6c-5973-40a5-a0c3-dc02de46d343/volumes/kubernetes.io~secret/metrics-certs major:0 minor:581 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a/volumes/kubernetes.io~projected/ca-certs major:0 minor:576 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a/volumes/kubernetes.io~projected/kube-api-access-kxl7x:{mountpoint:/var/lib/kubelet/pods/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a/volumes/kubernetes.io~projected/kube-api-access-kxl7x major:0 minor:577 
fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a/volumes/kubernetes.io~secret/catalogserver-certs:{mountpoint:/var/lib/kubelet/pods/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a/volumes/kubernetes.io~secret/catalogserver-certs major:0 minor:572 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0945a421-d7c4-46df-b3d9-507443627d51/volumes/kubernetes.io~projected/kube-api-access-k29kr:{mountpoint:/var/lib/kubelet/pods/0945a421-d7c4-46df-b3d9-507443627d51/volumes/kubernetes.io~projected/kube-api-access-k29kr major:0 minor:339 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0999f781-3299-4cb6-ba76-2a4f4584c685/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/0999f781-3299-4cb6-ba76-2a4f4584c685/volumes/kubernetes.io~projected/kube-api-access major:0 minor:230 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0999f781-3299-4cb6-ba76-2a4f4584c685/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0999f781-3299-4cb6-ba76-2a4f4584c685/volumes/kubernetes.io~secret/serving-cert major:0 minor:216 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0c7b317c-d141-4e69-9c82-4a5dda6c3248/volumes/kubernetes.io~projected/kube-api-access-549bq:{mountpoint:/var/lib/kubelet/pods/0c7b317c-d141-4e69-9c82-4a5dda6c3248/volumes/kubernetes.io~projected/kube-api-access-549bq major:0 minor:509 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0c7b317c-d141-4e69-9c82-4a5dda6c3248/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/0c7b317c-d141-4e69-9c82-4a5dda6c3248/volumes/kubernetes.io~secret/encryption-config major:0 minor:505 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0c7b317c-d141-4e69-9c82-4a5dda6c3248/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/0c7b317c-d141-4e69-9c82-4a5dda6c3248/volumes/kubernetes.io~secret/etcd-client major:0 minor:506 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/0c7b317c-d141-4e69-9c82-4a5dda6c3248/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0c7b317c-d141-4e69-9c82-4a5dda6c3248/volumes/kubernetes.io~secret/serving-cert major:0 minor:507 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0d72e695-0183-4ee8-8add-5425e67f7138/volumes/kubernetes.io~projected/kube-api-access-g6bvr:{mountpoint:/var/lib/kubelet/pods/0d72e695-0183-4ee8-8add-5425e67f7138/volumes/kubernetes.io~projected/kube-api-access-g6bvr major:0 minor:237 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0d72e695-0183-4ee8-8add-5425e67f7138/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0d72e695-0183-4ee8-8add-5425e67f7138/volumes/kubernetes.io~secret/serving-cert major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480/volumes/kubernetes.io~projected/kube-api-access-9fjk8:{mountpoint:/var/lib/kubelet/pods/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480/volumes/kubernetes.io~projected/kube-api-access-9fjk8 major:0 minor:238 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480/volumes/kubernetes.io~secret/serving-cert major:0 minor:223 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/106fc2a2-9e7b-4f86-94b8-b1a1906646d8/volumes/kubernetes.io~projected/kube-api-access-fqx6m:{mountpoint:/var/lib/kubelet/pods/106fc2a2-9e7b-4f86-94b8-b1a1906646d8/volumes/kubernetes.io~projected/kube-api-access-fqx6m major:0 minor:1161 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/106fc2a2-9e7b-4f86-94b8-b1a1906646d8/volumes/kubernetes.io~secret/client-ca-bundle:{mountpoint:/var/lib/kubelet/pods/106fc2a2-9e7b-4f86-94b8-b1a1906646d8/volumes/kubernetes.io~secret/client-ca-bundle major:0 minor:1159 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/106fc2a2-9e7b-4f86-94b8-b1a1906646d8/volumes/kubernetes.io~secret/secret-metrics-client-certs:{mountpoint:/var/lib/kubelet/pods/106fc2a2-9e7b-4f86-94b8-b1a1906646d8/volumes/kubernetes.io~secret/secret-metrics-client-certs major:0 minor:1155 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/106fc2a2-9e7b-4f86-94b8-b1a1906646d8/volumes/kubernetes.io~secret/secret-metrics-server-tls:{mountpoint:/var/lib/kubelet/pods/106fc2a2-9e7b-4f86-94b8-b1a1906646d8/volumes/kubernetes.io~secret/secret-metrics-server-tls major:0 minor:1160 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1084562a-20a0-432d-b739-90bc0a4daff2/volumes/kubernetes.io~projected/kube-api-access-qmsjt:{mountpoint:/var/lib/kubelet/pods/1084562a-20a0-432d-b739-90bc0a4daff2/volumes/kubernetes.io~projected/kube-api-access-qmsjt major:0 minor:818 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1084562a-20a0-432d-b739-90bc0a4daff2/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/1084562a-20a0-432d-b739-90bc0a4daff2/volumes/kubernetes.io~secret/cert major:0 minor:814 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1084562a-20a0-432d-b739-90bc0a4daff2/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls:{mountpoint:/var/lib/kubelet/pods/1084562a-20a0-432d-b739-90bc0a4daff2/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls major:0 minor:816 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/196e7607-1ddf-467b-9901-b4be746130a1/volumes/kubernetes.io~projected/kube-api-access-l4g9s:{mountpoint:/var/lib/kubelet/pods/196e7607-1ddf-467b-9901-b4be746130a1/volumes/kubernetes.io~projected/kube-api-access-l4g9s major:0 minor:1067 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/196e7607-1ddf-467b-9901-b4be746130a1/volumes/kubernetes.io~secret/certs:{mountpoint:/var/lib/kubelet/pods/196e7607-1ddf-467b-9901-b4be746130a1/volumes/kubernetes.io~secret/certs major:0 minor:1059 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/196e7607-1ddf-467b-9901-b4be746130a1/volumes/kubernetes.io~secret/node-bootstrap-token:{mountpoint:/var/lib/kubelet/pods/196e7607-1ddf-467b-9901-b4be746130a1/volumes/kubernetes.io~secret/node-bootstrap-token major:0 minor:1058 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1ad4aa30-f7d5-47ca-b01e-2643f7195685/volumes/kubernetes.io~projected/kube-api-access-fp8vt:{mountpoint:/var/lib/kubelet/pods/1ad4aa30-f7d5-47ca-b01e-2643f7195685/volumes/kubernetes.io~projected/kube-api-access-fp8vt major:0 minor:799 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1ad4aa30-f7d5-47ca-b01e-2643f7195685/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/1ad4aa30-f7d5-47ca-b01e-2643f7195685/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:795 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7/volumes/kubernetes.io~projected/kube-api-access-wzzjs:{mountpoint:/var/lib/kubelet/pods/1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7/volumes/kubernetes.io~projected/kube-api-access-wzzjs major:0 minor:340 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1cb8ab19-0564-4182-a7e3-0943c1480663/volumes/kubernetes.io~projected/kube-api-access-4v8jq:{mountpoint:/var/lib/kubelet/pods/1cb8ab19-0564-4182-a7e3-0943c1480663/volumes/kubernetes.io~projected/kube-api-access-4v8jq major:0 minor:1100 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1cb8ab19-0564-4182-a7e3-0943c1480663/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/1cb8ab19-0564-4182-a7e3-0943c1480663/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config major:0 minor:1098 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1cb8ab19-0564-4182-a7e3-0943c1480663/volumes/kubernetes.io~secret/node-exporter-tls:{mountpoint:/var/lib/kubelet/pods/1cb8ab19-0564-4182-a7e3-0943c1480663/volumes/kubernetes.io~secret/node-exporter-tls major:0 minor:1104 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/29490aed-9c97-42d1-94c8-44d1de13b70c/volumes/kubernetes.io~projected/kube-api-access-257hk:{mountpoint:/var/lib/kubelet/pods/29490aed-9c97-42d1-94c8-44d1de13b70c/volumes/kubernetes.io~projected/kube-api-access-257hk major:0 minor:811 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/29490aed-9c97-42d1-94c8-44d1de13b70c/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/29490aed-9c97-42d1-94c8-44d1de13b70c/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert major:0 minor:797 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/29fbc78b-1887-40d4-8165-f0f7cc40b583/volumes/kubernetes.io~projected/kube-api-access-vm2nt:{mountpoint:/var/lib/kubelet/pods/29fbc78b-1887-40d4-8165-f0f7cc40b583/volumes/kubernetes.io~projected/kube-api-access-vm2nt major:0 minor:819 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/29fbc78b-1887-40d4-8165-f0f7cc40b583/volumes/kubernetes.io~secret/machine-api-operator-tls:{mountpoint:/var/lib/kubelet/pods/29fbc78b-1887-40d4-8165-f0f7cc40b583/volumes/kubernetes.io~secret/machine-api-operator-tls major:0 minor:815 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2d014721-ed53-447a-b737-c496bbba18be/volumes/kubernetes.io~projected/kube-api-access-4btrk:{mountpoint:/var/lib/kubelet/pods/2d014721-ed53-447a-b737-c496bbba18be/volumes/kubernetes.io~projected/kube-api-access-4btrk major:0 minor:887 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2d014721-ed53-447a-b737-c496bbba18be/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/2d014721-ed53-447a-b737-c496bbba18be/volumes/kubernetes.io~secret/proxy-tls major:0 minor:886 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0/volumes/kubernetes.io~projected/kube-api-access-gmxj9:{mountpoint:/var/lib/kubelet/pods/2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0/volumes/kubernetes.io~projected/kube-api-access-gmxj9 major:0 minor:1002 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0/volumes/kubernetes.io~secret/proxy-tls major:0 minor:979 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6/volumes/kubernetes.io~projected/kube-api-access major:0 minor:226 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6/volumes/kubernetes.io~secret/serving-cert major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4/volumes/kubernetes.io~projected/kube-api-access-p5dk8:{mountpoint:/var/lib/kubelet/pods/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4/volumes/kubernetes.io~projected/kube-api-access-p5dk8 major:0 minor:233 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4/volumes/kubernetes.io~secret/serving-cert major:0 minor:206 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/432f611b-a1a2-4cc9-b005-17a16413d281/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/432f611b-a1a2-4cc9-b005-17a16413d281/volumes/kubernetes.io~projected/kube-api-access major:0 minor:465 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/432f611b-a1a2-4cc9-b005-17a16413d281/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/432f611b-a1a2-4cc9-b005-17a16413d281/volumes/kubernetes.io~secret/serving-cert major:0 minor:658 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/43d54514-989c-4c82-93f9-153b44eacdd1/volumes/kubernetes.io~projected/kube-api-access-z459j:{mountpoint:/var/lib/kubelet/pods/43d54514-989c-4c82-93f9-153b44eacdd1/volumes/kubernetes.io~projected/kube-api-access-z459j major:0 minor:1025 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/43d54514-989c-4c82-93f9-153b44eacdd1/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/43d54514-989c-4c82-93f9-153b44eacdd1/volumes/kubernetes.io~secret/default-certificate major:0 minor:1021 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/43d54514-989c-4c82-93f9-153b44eacdd1/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/43d54514-989c-4c82-93f9-153b44eacdd1/volumes/kubernetes.io~secret/metrics-certs major:0 minor:1019 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/43d54514-989c-4c82-93f9-153b44eacdd1/volumes/kubernetes.io~secret/stats-auth:{mountpoint:/var/lib/kubelet/pods/43d54514-989c-4c82-93f9-153b44eacdd1/volumes/kubernetes.io~secret/stats-auth major:0 minor:1015 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/582d2ba8-1210-47d0-a530-0b20b2fdde22/volumes/kubernetes.io~secret/tls-certificates:{mountpoint:/var/lib/kubelet/pods/582d2ba8-1210-47d0-a530-0b20b2fdde22/volumes/kubernetes.io~secret/tls-certificates major:0 minor:1020 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5900a401-21c2-47f0-a921-47c648da558d/volumes/kubernetes.io~projected/kube-api-access-qtnxf:{mountpoint:/var/lib/kubelet/pods/5900a401-21c2-47f0-a921-47c648da558d/volumes/kubernetes.io~projected/kube-api-access-qtnxf major:0 minor:1099 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5900a401-21c2-47f0-a921-47c648da558d/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/5900a401-21c2-47f0-a921-47c648da558d/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config major:0 minor:1096 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/5900a401-21c2-47f0-a921-47c648da558d/volumes/kubernetes.io~secret/kube-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/5900a401-21c2-47f0-a921-47c648da558d/volumes/kubernetes.io~secret/kube-state-metrics-tls major:0 minor:1105 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5ea90fee-5b5e-4b59-bfc4-969ee8c7912e/volumes/kubernetes.io~projected/kube-api-access-b46jq:{mountpoint:/var/lib/kubelet/pods/5ea90fee-5b5e-4b59-bfc4-969ee8c7912e/volumes/kubernetes.io~projected/kube-api-access-b46jq major:0 minor:390 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5ea90fee-5b5e-4b59-bfc4-969ee8c7912e/volumes/kubernetes.io~secret/signing-key:{mountpoint:/var/lib/kubelet/pods/5ea90fee-5b5e-4b59-bfc4-969ee8c7912e/volumes/kubernetes.io~secret/signing-key major:0 minor:336 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/62b82d72-d73c-451a-84e1-551d73036aa8/volumes/kubernetes.io~projected/kube-api-access-lvnrf:{mountpoint:/var/lib/kubelet/pods/62b82d72-d73c-451a-84e1-551d73036aa8/volumes/kubernetes.io~projected/kube-api-access-lvnrf major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6a6a616d-012a-479e-ab3d-b21295ea1805/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/6a6a616d-012a-479e-ab3d-b21295ea1805/volumes/kubernetes.io~projected/kube-api-access major:0 minor:231 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6a6a616d-012a-479e-ab3d-b21295ea1805/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/6a6a616d-012a-479e-ab3d-b21295ea1805/volumes/kubernetes.io~secret/serving-cert major:0 minor:210 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6f266bad-8b30-4300-ad93-9d48e61f2440/volumes/kubernetes.io~projected/kube-api-access-shbrj:{mountpoint:/var/lib/kubelet/pods/6f266bad-8b30-4300-ad93-9d48e61f2440/volumes/kubernetes.io~projected/kube-api-access-shbrj major:0 minor:241 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/6f266bad-8b30-4300-ad93-9d48e61f2440/volumes/kubernetes.io~secret/marketplace-operator-metrics:{mountpoint:/var/lib/kubelet/pods/6f266bad-8b30-4300-ad93-9d48e61f2440/volumes/kubernetes.io~secret/marketplace-operator-metrics major:0 minor:587 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/71755097-7543-48f8-8925-0e21650bf8f6/volumes/kubernetes.io~projected/kube-api-access-qvhfc:{mountpoint:/var/lib/kubelet/pods/71755097-7543-48f8-8925-0e21650bf8f6/volumes/kubernetes.io~projected/kube-api-access-qvhfc major:0 minor:824 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/71755097-7543-48f8-8925-0e21650bf8f6/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/71755097-7543-48f8-8925-0e21650bf8f6/volumes/kubernetes.io~secret/serving-cert major:0 minor:810 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/74476be5-669a-4737-b93b-c4870423a4da/volumes/kubernetes.io~projected/kube-api-access-nvx6m:{mountpoint:/var/lib/kubelet/pods/74476be5-669a-4737-b93b-c4870423a4da/volumes/kubernetes.io~projected/kube-api-access-nvx6m major:0 minor:1024 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/74476be5-669a-4737-b93b-c4870423a4da/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/74476be5-669a-4737-b93b-c4870423a4da/volumes/kubernetes.io~secret/cert major:0 minor:1022 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/74795f5d-dcd7-4723-8931-c34b59ce3087/volumes/kubernetes.io~projected/kube-api-access-8rzsk:{mountpoint:/var/lib/kubelet/pods/74795f5d-dcd7-4723-8931-c34b59ce3087/volumes/kubernetes.io~projected/kube-api-access-8rzsk major:0 minor:303 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8126b78e-d1e4-4de7-a71d-ebc9fa0afdae/volumes/kubernetes.io~projected/kube-api-access-hww8g:{mountpoint:/var/lib/kubelet/pods/8126b78e-d1e4-4de7-a71d-ebc9fa0afdae/volumes/kubernetes.io~projected/kube-api-access-hww8g major:0 minor:378 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f/volumes/kubernetes.io~projected/kube-api-access-d89r9:{mountpoint:/var/lib/kubelet/pods/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f/volumes/kubernetes.io~projected/kube-api-access-d89r9 major:0 minor:800 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:789 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8b906fc0-f2bf-4586-97e6-921bbd467b65/volumes/kubernetes.io~projected/kube-api-access-rw4s4:{mountpoint:/var/lib/kubelet/pods/8b906fc0-f2bf-4586-97e6-921bbd467b65/volumes/kubernetes.io~projected/kube-api-access-rw4s4 major:0 minor:500 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8b906fc0-f2bf-4586-97e6-921bbd467b65/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/8b906fc0-f2bf-4586-97e6-921bbd467b65/volumes/kubernetes.io~secret/encryption-config major:0 minor:498 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8b906fc0-f2bf-4586-97e6-921bbd467b65/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/8b906fc0-f2bf-4586-97e6-921bbd467b65/volumes/kubernetes.io~secret/etcd-client major:0 minor:499 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8b906fc0-f2bf-4586-97e6-921bbd467b65/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/8b906fc0-f2bf-4586-97e6-921bbd467b65/volumes/kubernetes.io~secret/serving-cert major:0 minor:497 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8cb5158f-2199-42c0-995a-8490c9ec8a95/volumes/kubernetes.io~projected/kube-api-access-p2chb:{mountpoint:/var/lib/kubelet/pods/8cb5158f-2199-42c0-995a-8490c9ec8a95/volumes/kubernetes.io~projected/kube-api-access-p2chb major:0 minor:232 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/8cb5158f-2199-42c0-995a-8490c9ec8a95/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/8cb5158f-2199-42c0-995a-8490c9ec8a95/volumes/kubernetes.io~secret/metrics-tls major:0 minor:433 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8e812dd9-cd05-4e9e-8710-d0920181ece2/volumes/kubernetes.io~projected/kube-api-access-s54f9:{mountpoint:/var/lib/kubelet/pods/8e812dd9-cd05-4e9e-8710-d0920181ece2/volumes/kubernetes.io~projected/kube-api-access-s54f9 major:0 minor:236 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8ee99294-4785-49d0-b493-0d734cf09396/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/8ee99294-4785-49d0-b493-0d734cf09396/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8ee99294-4785-49d0-b493-0d734cf09396/volumes/kubernetes.io~projected/kube-api-access-tb7tz:{mountpoint:/var/lib/kubelet/pods/8ee99294-4785-49d0-b493-0d734cf09396/volumes/kubernetes.io~projected/kube-api-access-tb7tz major:0 minor:224 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8ee99294-4785-49d0-b493-0d734cf09396/volumes/kubernetes.io~secret/image-registry-operator-tls:{mountpoint:/var/lib/kubelet/pods/8ee99294-4785-49d0-b493-0d734cf09396/volumes/kubernetes.io~secret/image-registry-operator-tls major:0 minor:435 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d/volumes/kubernetes.io~projected/kube-api-access-gpk5h:{mountpoint:/var/lib/kubelet/pods/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d/volumes/kubernetes.io~projected/kube-api-access-gpk5h major:0 minor:359 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d/volumes/kubernetes.io~secret/serving-cert major:0 minor:341 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/91331360-dc70-45bb-a815-e00664bae6c4/volumes/kubernetes.io~projected/kube-api-access-8w8sl:{mountpoint:/var/lib/kubelet/pods/91331360-dc70-45bb-a815-e00664bae6c4/volumes/kubernetes.io~projected/kube-api-access-8w8sl major:0 minor:118 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/932a70df-3afe-4873-9449-ab6e061d3fe3/volumes/kubernetes.io~projected/kube-api-access-fv8x5:{mountpoint:/var/lib/kubelet/pods/932a70df-3afe-4873-9449-ab6e061d3fe3/volumes/kubernetes.io~projected/kube-api-access-fv8x5 major:0 minor:383 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9ccdc221-4ec5-487e-8ec4-85284ed628d8/volumes/kubernetes.io~projected/kube-api-access-ghd2r:{mountpoint:/var/lib/kubelet/pods/9ccdc221-4ec5-487e-8ec4-85284ed628d8/volumes/kubernetes.io~projected/kube-api-access-ghd2r major:0 minor:92 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9ccdc221-4ec5-487e-8ec4-85284ed628d8/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/9ccdc221-4ec5-487e-8ec4-85284ed628d8/volumes/kubernetes.io~secret/metrics-tls major:0 minor:85 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9cfd2323-c33a-4d80-9c25-710920c0e605/volumes/kubernetes.io~projected/kube-api-access-blfkg:{mountpoint:/var/lib/kubelet/pods/9cfd2323-c33a-4d80-9c25-710920c0e605/volumes/kubernetes.io~projected/kube-api-access-blfkg major:0 minor:1079 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9cfd2323-c33a-4d80-9c25-710920c0e605/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/9cfd2323-c33a-4d80-9c25-710920c0e605/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config major:0 minor:1074 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9cfd2323-c33a-4d80-9c25-710920c0e605/volumes/kubernetes.io~secret/prometheus-operator-tls:{mountpoint:/var/lib/kubelet/pods/9cfd2323-c33a-4d80-9c25-710920c0e605/volumes/kubernetes.io~secret/prometheus-operator-tls major:0 minor:1078 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28/volumes/kubernetes.io~projected/kube-api-access-gmffc:{mountpoint:/var/lib/kubelet/pods/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28/volumes/kubernetes.io~projected/kube-api-access-gmffc major:0 minor:136 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:127 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9f5c64aa-676e-4e48-b714-02f6edb1d361/volumes/kubernetes.io~projected/kube-api-access-xttqt:{mountpoint:/var/lib/kubelet/pods/9f5c64aa-676e-4e48-b714-02f6edb1d361/volumes/kubernetes.io~projected/kube-api-access-xttqt major:0 minor:813 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9f5c64aa-676e-4e48-b714-02f6edb1d361/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/9f5c64aa-676e-4e48-b714-02f6edb1d361/volumes/kubernetes.io~secret/cert major:0 minor:808 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9fc664ff-2e8f-441d-82dc-8f21c1d362d7/volumes/kubernetes.io~projected/kube-api-access-x46bf:{mountpoint:/var/lib/kubelet/pods/9fc664ff-2e8f-441d-82dc-8f21c1d362d7/volumes/kubernetes.io~projected/kube-api-access-x46bf major:0 minor:358 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9fc664ff-2e8f-441d-82dc-8f21c1d362d7/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/9fc664ff-2e8f-441d-82dc-8f21c1d362d7/volumes/kubernetes.io~secret/serving-cert major:0 minor:342 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/a078565a-6970-4f42-84f4-938f1d637245/volumes/kubernetes.io~projected/kube-api-access-cxv6v:{mountpoint:/var/lib/kubelet/pods/a078565a-6970-4f42-84f4-938f1d637245/volumes/kubernetes.io~projected/kube-api-access-cxv6v major:0 minor:228 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a078565a-6970-4f42-84f4-938f1d637245/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/a078565a-6970-4f42-84f4-938f1d637245/volumes/kubernetes.io~secret/etcd-client major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a078565a-6970-4f42-84f4-938f1d637245/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a078565a-6970-4f42-84f4-938f1d637245/volumes/kubernetes.io~secret/serving-cert major:0 minor:217 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a3657106-1eea-4031-8c92-85ba6287b425/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/a3657106-1eea-4031-8c92-85ba6287b425/volumes/kubernetes.io~projected/kube-api-access major:0 minor:731 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/aa4cba67-b5d4-46c2-8cad-1a1379f764cb/volumes/kubernetes.io~projected/kube-api-access-sxf74:{mountpoint:/var/lib/kubelet/pods/aa4cba67-b5d4-46c2-8cad-1a1379f764cb/volumes/kubernetes.io~projected/kube-api-access-sxf74 major:0 minor:1208 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/aa4cba67-b5d4-46c2-8cad-1a1379f764cb/volumes/kubernetes.io~secret/federate-client-tls:{mountpoint:/var/lib/kubelet/pods/aa4cba67-b5d4-46c2-8cad-1a1379f764cb/volumes/kubernetes.io~secret/federate-client-tls major:0 minor:1207 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/aa4cba67-b5d4-46c2-8cad-1a1379f764cb/volumes/kubernetes.io~secret/secret-telemeter-client:{mountpoint:/var/lib/kubelet/pods/aa4cba67-b5d4-46c2-8cad-1a1379f764cb/volumes/kubernetes.io~secret/secret-telemeter-client major:0 minor:1204 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/aa4cba67-b5d4-46c2-8cad-1a1379f764cb/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/aa4cba67-b5d4-46c2-8cad-1a1379f764cb/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config major:0 minor:1205 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/aa4cba67-b5d4-46c2-8cad-1a1379f764cb/volumes/kubernetes.io~secret/telemeter-client-tls:{mountpoint:/var/lib/kubelet/pods/aa4cba67-b5d4-46c2-8cad-1a1379f764cb/volumes/kubernetes.io~secret/telemeter-client-tls major:0 minor:1206 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/aaadd000-4db7-4264-bfc1-b0ad63c8fb05/volumes/kubernetes.io~projected/kube-api-access-v4qbs:{mountpoint:/var/lib/kubelet/pods/aaadd000-4db7-4264-bfc1-b0ad63c8fb05/volumes/kubernetes.io~projected/kube-api-access-v4qbs major:0 minor:1023 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/accc57fb-75f5-4f89-9804-6ede7f77e27c/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/accc57fb-75f5-4f89-9804-6ede7f77e27c/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/accc57fb-75f5-4f89-9804-6ede7f77e27c/volumes/kubernetes.io~projected/kube-api-access-nwfph:{mountpoint:/var/lib/kubelet/pods/accc57fb-75f5-4f89-9804-6ede7f77e27c/volumes/kubernetes.io~projected/kube-api-access-nwfph major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/accc57fb-75f5-4f89-9804-6ede7f77e27c/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/accc57fb-75f5-4f89-9804-6ede7f77e27c/volumes/kubernetes.io~secret/metrics-tls major:0 minor:453 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/af1bbeee-1faf-43d1-943f-ee5319cef4e9/volumes/kubernetes.io~projected/kube-api-access-nkvcs:{mountpoint:/var/lib/kubelet/pods/af1bbeee-1faf-43d1-943f-ee5319cef4e9/volumes/kubernetes.io~projected/kube-api-access-nkvcs major:0 minor:1101 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/af1bbeee-1faf-43d1-943f-ee5319cef4e9/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/af1bbeee-1faf-43d1-943f-ee5319cef4e9/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config major:0 minor:1092 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/af1bbeee-1faf-43d1-943f-ee5319cef4e9/volumes/kubernetes.io~secret/openshift-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/af1bbeee-1faf-43d1-943f-ee5319cef4e9/volumes/kubernetes.io~secret/openshift-state-metrics-tls major:0 minor:1097 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b0f77d68-f228-4f82-befb-fb2a2ce2e976/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/b0f77d68-f228-4f82-befb-fb2a2ce2e976/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:544 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b0f77d68-f228-4f82-befb-fb2a2ce2e976/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/b0f77d68-f228-4f82-befb-fb2a2ce2e976/volumes/kubernetes.io~empty-dir/tmp major:0 minor:566 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b0f77d68-f228-4f82-befb-fb2a2ce2e976/volumes/kubernetes.io~projected/kube-api-access-t77j8:{mountpoint:/var/lib/kubelet/pods/b0f77d68-f228-4f82-befb-fb2a2ce2e976/volumes/kubernetes.io~projected/kube-api-access-t77j8 major:0 minor:567 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b6948f93-b573-4f09-b754-aaa2269e2875/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/b6948f93-b573-4f09-b754-aaa2269e2875/volumes/kubernetes.io~projected/ca-certs major:0 minor:580 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b6948f93-b573-4f09-b754-aaa2269e2875/volumes/kubernetes.io~projected/kube-api-access-t2g9q:{mountpoint:/var/lib/kubelet/pods/b6948f93-b573-4f09-b754-aaa2269e2875/volumes/kubernetes.io~projected/kube-api-access-t2g9q major:0 minor:603 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b9c87410-8689-4884-b5a8-df3ecbb7f1a4/volumes/kubernetes.io~projected/kube-api-access-l5j9d:{mountpoint:/var/lib/kubelet/pods/b9c87410-8689-4884-b5a8-df3ecbb7f1a4/volumes/kubernetes.io~projected/kube-api-access-l5j9d major:0 minor:338 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bb35841e-d992-4044-aaaa-06c9faf47bd0/volumes/kubernetes.io~projected/kube-api-access-zlxfz:{mountpoint:/var/lib/kubelet/pods/bb35841e-d992-4044-aaaa-06c9faf47bd0/volumes/kubernetes.io~projected/kube-api-access-zlxfz major:0 minor:229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bb35841e-d992-4044-aaaa-06c9faf47bd0/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/bb35841e-d992-4044-aaaa-06c9faf47bd0/volumes/kubernetes.io~secret/serving-cert major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bb942756-bac7-414d-b179-cebdce588a13/volumes/kubernetes.io~projected/kube-api-access-2ktpl:{mountpoint:/var/lib/kubelet/pods/bb942756-bac7-414d-b179-cebdce588a13/volumes/kubernetes.io~projected/kube-api-access-2ktpl major:0 minor:147 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bb942756-bac7-414d-b179-cebdce588a13/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/bb942756-bac7-414d-b179-cebdce588a13/volumes/kubernetes.io~secret/webhook-cert major:0 minor:126 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bdf80ddc-7c99-4f60-814b-ba98809ef41d/volumes/kubernetes.io~projected/kube-api-access-bql7p:{mountpoint:/var/lib/kubelet/pods/bdf80ddc-7c99-4f60-814b-ba98809ef41d/volumes/kubernetes.io~projected/kube-api-access-bql7p major:0 minor:834 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bdf80ddc-7c99-4f60-814b-ba98809ef41d/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/bdf80ddc-7c99-4f60-814b-ba98809ef41d/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:865 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/bdf80ddc-7c99-4f60-814b-ba98809ef41d/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/bdf80ddc-7c99-4f60-814b-ba98809ef41d/volumes/kubernetes.io~secret/webhook-cert major:0 minor:866 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c2635254-a491-42e5-b598-461c24bf77ca/volumes/kubernetes.io~projected/kube-api-access-p4hfd:{mountpoint:/var/lib/kubelet/pods/c2635254-a491-42e5-b598-461c24bf77ca/volumes/kubernetes.io~projected/kube-api-access-p4hfd major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c2635254-a491-42e5-b598-461c24bf77ca/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/c2635254-a491-42e5-b598-461c24bf77ca/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:432 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c2635254-a491-42e5-b598-461c24bf77ca/volumes/kubernetes.io~secret/node-tuning-operator-tls:{mountpoint:/var/lib/kubelet/pods/c2635254-a491-42e5-b598-461c24bf77ca/volumes/kubernetes.io~secret/node-tuning-operator-tls major:0 minor:431 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/caec44dc-aab7-4407-b34a-52bbe4b4f635/volumes/kubernetes.io~projected/kube-api-access-xml27:{mountpoint:/var/lib/kubelet/pods/caec44dc-aab7-4407-b34a-52bbe4b4f635/volumes/kubernetes.io~projected/kube-api-access-xml27 major:0 minor:801 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/caec44dc-aab7-4407-b34a-52bbe4b4f635/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/caec44dc-aab7-4407-b34a-52bbe4b4f635/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert major:0 minor:796 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ce65f61f-8e3a-47d5-ac12-ad4ab05d2850/volumes/kubernetes.io~projected/kube-api-access-jmnjp:{mountpoint:/var/lib/kubelet/pods/ce65f61f-8e3a-47d5-ac12-ad4ab05d2850/volumes/kubernetes.io~projected/kube-api-access-jmnjp major:0 minor:798 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/ce65f61f-8e3a-47d5-ac12-ad4ab05d2850/volumes/kubernetes.io~secret/samples-operator-tls:{mountpoint:/var/lib/kubelet/pods/ce65f61f-8e3a-47d5-ac12-ad4ab05d2850/volumes/kubernetes.io~secret/samples-operator-tls major:0 minor:794 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d0605021-862d-424a-a4c1-037fb005b77e/volumes/kubernetes.io~projected/kube-api-access-cxj5c:{mountpoint:/var/lib/kubelet/pods/d0605021-862d-424a-a4c1-037fb005b77e/volumes/kubernetes.io~projected/kube-api-access-cxj5c major:0 minor:125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d0605021-862d-424a-a4c1-037fb005b77e/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/d0605021-862d-424a-a4c1-037fb005b77e/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d26036f1-bdce-4ec5-873f-962fa7e8e6c1/volumes/kubernetes.io~projected/kube-api-access-lhzg4:{mountpoint:/var/lib/kubelet/pods/d26036f1-bdce-4ec5-873f-962fa7e8e6c1/volumes/kubernetes.io~projected/kube-api-access-lhzg4 major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d26036f1-bdce-4ec5-873f-962fa7e8e6c1/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/d26036f1-bdce-4ec5-873f-962fa7e8e6c1/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d4d2218c-f9df-4d43-8727-ed3a920e23f7/volumes/kubernetes.io~projected/kube-api-access-w4qp9:{mountpoint:/var/lib/kubelet/pods/d4d2218c-f9df-4d43-8727-ed3a920e23f7/volumes/kubernetes.io~projected/kube-api-access-w4qp9 major:0 minor:234 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d4d2218c-f9df-4d43-8727-ed3a920e23f7/volumes/kubernetes.io~secret/package-server-manager-serving-cert:{mountpoint:/var/lib/kubelet/pods/d4d2218c-f9df-4d43-8727-ed3a920e23f7/volumes/kubernetes.io~secret/package-server-manager-serving-cert major:0 minor:583 
fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/da04c6fa-4916-4bed-a6b2-cc92bf2ee379/volumes/kubernetes.io~projected/kube-api-access-vq4rm:{mountpoint:/var/lib/kubelet/pods/da04c6fa-4916-4bed-a6b2-cc92bf2ee379/volumes/kubernetes.io~projected/kube-api-access-vq4rm major:0 minor:493 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/da04c6fa-4916-4bed-a6b2-cc92bf2ee379/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/da04c6fa-4916-4bed-a6b2-cc92bf2ee379/volumes/kubernetes.io~secret/metrics-tls major:0 minor:502 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/db376fea-5756-4bc2-9685-f32730b5a6f7/volumes/kubernetes.io~projected/kube-api-access-r6qn5:{mountpoint:/var/lib/kubelet/pods/db376fea-5756-4bc2-9685-f32730b5a6f7/volumes/kubernetes.io~projected/kube-api-access-r6qn5 major:0 minor:332 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/db52ca42-e458-407f-9eeb-bf6de6405edc/volumes/kubernetes.io~projected/kube-api-access-jx9p2:{mountpoint:/var/lib/kubelet/pods/db52ca42-e458-407f-9eeb-bf6de6405edc/volumes/kubernetes.io~projected/kube-api-access-jx9p2 major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/db52ca42-e458-407f-9eeb-bf6de6405edc/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/db52ca42-e458-407f-9eeb-bf6de6405edc/volumes/kubernetes.io~secret/srv-cert major:0 minor:582 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e5e0836f-c0b4-40cd-9f63-55774da2740e/volumes/kubernetes.io~projected/kube-api-access-k94j4:{mountpoint:/var/lib/kubelet/pods/e5e0836f-c0b4-40cd-9f63-55774da2740e/volumes/kubernetes.io~projected/kube-api-access-k94j4 major:0 minor:912 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e5e0836f-c0b4-40cd-9f63-55774da2740e/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/e5e0836f-c0b4-40cd-9f63-55774da2740e/volumes/kubernetes.io~secret/proxy-tls major:0 minor:908 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/e8d3cf68-ed97-45b9-8c83-b42bb1f789fc/volumes/kubernetes.io~projected/kube-api-access-59hld:{mountpoint:/var/lib/kubelet/pods/e8d3cf68-ed97-45b9-8c83-b42bb1f789fc/volumes/kubernetes.io~projected/kube-api-access-59hld major:0 minor:501 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ec53d7fa-445b-4e1d-84ef-545f08e80ccc/volumes/kubernetes.io~projected/kube-api-access-wj9sq:{mountpoint:/var/lib/kubelet/pods/ec53d7fa-445b-4e1d-84ef-545f08e80ccc/volumes/kubernetes.io~projected/kube-api-access-wj9sq major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ec53d7fa-445b-4e1d-84ef-545f08e80ccc/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/ec53d7fa-445b-4e1d-84ef-545f08e80ccc/volumes/kubernetes.io~secret/serving-cert major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ee376320-9ca0-444d-ab37-9cbcb6729b11/volumes/kubernetes.io~projected/kube-api-access-25k9g:{mountpoint:/var/lib/kubelet/pods/ee376320-9ca0-444d-ab37-9cbcb6729b11/volumes/kubernetes.io~projected/kube-api-access-25k9g major:0 minor:225 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ee376320-9ca0-444d-ab37-9cbcb6729b11/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/ee376320-9ca0-444d-ab37-9cbcb6729b11/volumes/kubernetes.io~secret/srv-cert major:0 minor:584 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f076eaf0-b041-4db0-ba06-3d85e23bb654/volumes/kubernetes.io~projected/kube-api-access-f25pg:{mountpoint:/var/lib/kubelet/pods/f076eaf0-b041-4db0-ba06-3d85e23bb654/volumes/kubernetes.io~projected/kube-api-access-f25pg major:0 minor:248 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f076eaf0-b041-4db0-ba06-3d85e23bb654/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f076eaf0-b041-4db0-ba06-3d85e23bb654/volumes/kubernetes.io~secret/serving-cert major:0 minor:222 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/f69a00b6-d908-4485-bb0d-57594fc01d24/volumes/kubernetes.io~projected/kube-api-access-5r7qd:{mountpoint:/var/lib/kubelet/pods/f69a00b6-d908-4485-bb0d-57594fc01d24/volumes/kubernetes.io~projected/kube-api-access-5r7qd major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f69a00b6-d908-4485-bb0d-57594fc01d24/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls:{mountpoint:/var/lib/kubelet/pods/f69a00b6-d908-4485-bb0d-57594fc01d24/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls major:0 minor:585 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f875878f-3588-42f1-9488-750d9f4582f8/volumes/kubernetes.io~projected/kube-api-access-nn7zt:{mountpoint:/var/lib/kubelet/pods/f875878f-3588-42f1-9488-750d9f4582f8/volumes/kubernetes.io~projected/kube-api-access-nn7zt major:0 minor:1220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f875878f-3588-42f1-9488-750d9f4582f8/volumes/kubernetes.io~secret/webhook-certs:{mountpoint:/var/lib/kubelet/pods/f875878f-3588-42f1-9488-750d9f4582f8/volumes/kubernetes.io~secret/webhook-certs major:0 minor:1215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f88c2a18-11f5-45ef-aff1-3c5976716d85/volumes/kubernetes.io~projected/kube-api-access-scz6j:{mountpoint:/var/lib/kubelet/pods/f88c2a18-11f5-45ef-aff1-3c5976716d85/volumes/kubernetes.io~projected/kube-api-access-scz6j major:0 minor:812 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f88c2a18-11f5-45ef-aff1-3c5976716d85/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls:{mountpoint:/var/lib/kubelet/pods/f88c2a18-11f5-45ef-aff1-3c5976716d85/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls major:0 minor:809 fsType:tmpfs blockSize:0} overlay_0-1005:{mountpoint:/var/lib/containers/storage/overlay/048455be9ad31468e8ef7a6f3431fa9aa5094d0e6bcd16fc6c4e0777e8324203/merged major:0 minor:1005 fsType:overlay blockSize:0} 
overlay_0-1009:{mountpoint:/var/lib/containers/storage/overlay/171cd4a8eabf3c6cc5a855a513e2893218c05c958ba02638637ea338062fc7f4/merged major:0 minor:1009 fsType:overlay blockSize:0} overlay_0-101:{mountpoint:/var/lib/containers/storage/overlay/ff10701b53dea463c4824d679f82a5633a3a3662483486b7662ccfbc786c2e6f/merged major:0 minor:101 fsType:overlay blockSize:0} overlay_0-103:{mountpoint:/var/lib/containers/storage/overlay/148657cc95fa478a2bd801a392f5217143be02fa5653ee8774da652779481d2b/merged major:0 minor:103 fsType:overlay blockSize:0} overlay_0-1032:{mountpoint:/var/lib/containers/storage/overlay/b5264b5d8bebbffdcd9266b7ea727334bebea207f1848c766242fd162a0341b5/merged major:0 minor:1032 fsType:overlay blockSize:0} overlay_0-1036:{mountpoint:/var/lib/containers/storage/overlay/51fce284ad6663ebb470b7cc9d404d6844af660910d47a26edf31cb916227b2d/merged major:0 minor:1036 fsType:overlay blockSize:0} overlay_0-1038:{mountpoint:/var/lib/containers/storage/overlay/2c73a64e17d4085ce112fe634fb6f5576d40b2ebc3914dd771111b952d9b4fa1/merged major:0 minor:1038 fsType:overlay blockSize:0} overlay_0-1040:{mountpoint:/var/lib/containers/storage/overlay/3fb042bfbd4047b781b2720ef4a9eed98526a3c9f057fc69f90b315c7af1510d/merged major:0 minor:1040 fsType:overlay blockSize:0} overlay_0-1043:{mountpoint:/var/lib/containers/storage/overlay/6adcf7b54bd5a5827cbb9abdcf08af6acf61f760a0c46bcb499ac578b7f3990d/merged major:0 minor:1043 fsType:overlay blockSize:0} overlay_0-1049:{mountpoint:/var/lib/containers/storage/overlay/c3957bf585ba766d0c8dde8b3def4d3c9a8fa98e1382bfeef484bf28fdeaa35b/merged major:0 minor:1049 fsType:overlay blockSize:0} overlay_0-1056:{mountpoint:/var/lib/containers/storage/overlay/5fef26ea2bda8aaf9ee330bf4fc6315b604585157e8f2ec05e2dcec5175fd14d/merged major:0 minor:1056 fsType:overlay blockSize:0} overlay_0-1070:{mountpoint:/var/lib/containers/storage/overlay/cb8262d05331b80393463846094128b1105705c8ed6b74ff4e63e7f531320a51/merged major:0 minor:1070 fsType:overlay blockSize:0} 
overlay_0-1072:{mountpoint:/var/lib/containers/storage/overlay/b052161ae31c77fcaee720610f1539066d704bd824bb4db71f58a7f5b27806ea/merged major:0 minor:1072 fsType:overlay blockSize:0} overlay_0-1082:{mountpoint:/var/lib/containers/storage/overlay/58c82704cb91b772295288a6d56e3732d02eb8f5f88c875b351e8dd4c60b2f41/merged major:0 minor:1082 fsType:overlay blockSize:0} overlay_0-1084:{mountpoint:/var/lib/containers/storage/overlay/fa5baa4ee99e96d40efc88f7c448b5d054952b13a3fd9e89d3bf1db335374719/merged major:0 minor:1084 fsType:overlay blockSize:0} overlay_0-1086:{mountpoint:/var/lib/containers/storage/overlay/dbcedb8a9a6e02d574afb22c38f9ed629f56b1400bd4300f4b9b658b0b8294ea/merged major:0 minor:1086 fsType:overlay blockSize:0} overlay_0-1106:{mountpoint:/var/lib/containers/storage/overlay/7d5902456a4a16524a12dce13a18c2cfaff8a7b41f4a0a1ceac00b13f36f5bed/merged major:0 minor:1106 fsType:overlay blockSize:0} overlay_0-1108:{mountpoint:/var/lib/containers/storage/overlay/3c1f630892fe2ccce18daca6b5256dbd38d0f9df6953b8b29dc3d32c78bf2f0a/merged major:0 minor:1108 fsType:overlay blockSize:0} overlay_0-1110:{mountpoint:/var/lib/containers/storage/overlay/3c3081af2b57caec5c4b40ec8b82ce1ac1884cf6cd16c0a7e59f7fb8637d8a88/merged major:0 minor:1110 fsType:overlay blockSize:0} overlay_0-1116:{mountpoint:/var/lib/containers/storage/overlay/df13d3fb605741b111fec55f56b2fa4c46b5ae5b2ebf81dd894959e72201a17e/merged major:0 minor:1116 fsType:overlay blockSize:0} overlay_0-112:{mountpoint:/var/lib/containers/storage/overlay/64d9d5418f129ed611c9422f03d109fc9b949b36525f96554a1557f2b85f34a3/merged major:0 minor:112 fsType:overlay blockSize:0} overlay_0-1122:{mountpoint:/var/lib/containers/storage/overlay/6783523a4c6d70936ee4992562a03710aaecd9f68bfc0ff7bb36d1c1c06af342/merged major:0 minor:1122 fsType:overlay blockSize:0} overlay_0-1127:{mountpoint:/var/lib/containers/storage/overlay/640b0bd397a185412aa686c0cae754fac164cf03a214f7a5b1ca55df86237b31/merged major:0 minor:1127 fsType:overlay blockSize:0} 
overlay_0-1129:{mountpoint:/var/lib/containers/storage/overlay/2e75acc69e15bc01495740d571efa0a60db472d38e003ff1abc33d91d9fb4758/merged major:0 minor:1129 fsType:overlay blockSize:0} overlay_0-113:{mountpoint:/var/lib/containers/storage/overlay/83386d9f76a53867a9839739932ed861684536b6757598282da4c690bf13fa93/merged major:0 minor:113 fsType:overlay blockSize:0} overlay_0-1131:{mountpoint:/var/lib/containers/storage/overlay/2ec2d80347d4890541dd8d693049556264d8dbe39bb67e055ee7da38000e3047/merged major:0 minor:1131 fsType:overlay blockSize:0} overlay_0-1133:{mountpoint:/var/lib/containers/storage/overlay/dc44ec579e6f2555d2171a392346bb26c0aca80b2a7bb4c315f7734d795cba02/merged major:0 minor:1133 fsType:overlay blockSize:0} overlay_0-1135:{mountpoint:/var/lib/containers/storage/overlay/cfbcea9b99e17f46f5ed2ec547c1915e37b6b7b7741cfb7e0210016bebeded5d/merged major:0 minor:1135 fsType:overlay blockSize:0} overlay_0-1150:{mountpoint:/var/lib/containers/storage/overlay/b29e57050a74f00b4242be333fb6889a84f00e8d24557bb0bafb7401def904a7/merged major:0 minor:1150 fsType:overlay blockSize:0} overlay_0-116:{mountpoint:/var/lib/containers/storage/overlay/fdb47f4b9d5df1e8c3330bccb5acd47e483e24f87905c4bff7b379cbd5d3ff04/merged major:0 minor:116 fsType:overlay blockSize:0} overlay_0-1164:{mountpoint:/var/lib/containers/storage/overlay/5c0bc336b7e39a97a59b6828bf9bcf78331d1e91c9b694756ba403716c23870b/merged major:0 minor:1164 fsType:overlay blockSize:0} overlay_0-1166:{mountpoint:/var/lib/containers/storage/overlay/f3639b1bfcecf2ed6d07cf452d9449a5e98e3c295f8ccf193b6a1ab00d9dda1c/merged major:0 minor:1166 fsType:overlay blockSize:0} overlay_0-1177:{mountpoint:/var/lib/containers/storage/overlay/5ea35db3cd3251cee715b255dac684bc1639da692f2a3191f06835756f971714/merged major:0 minor:1177 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/4bc48671c974ede535f26e3a419fb9349af4289622851a7023cb29880dd10c2f/merged major:0 minor:121 fsType:overlay blockSize:0} 
overlay_0-1211:{mountpoint:/var/lib/containers/storage/overlay/3adecd52c27c8ed857a076d8d4a0992523d63a4a2fb18fadc530035c81d9a8dd/merged major:0 minor:1211 fsType:overlay blockSize:0} overlay_0-1213:{mountpoint:/var/lib/containers/storage/overlay/0ca31c4f6be6a4a7ca18b734384a6563905e8e83ca00f6507eab1b820c2e22d5/merged major:0 minor:1213 fsType:overlay blockSize:0} overlay_0-1223:{mountpoint:/var/lib/containers/storage/overlay/8e4053ab906c334e02c147156657ee013e0e9ff71ce136c10bea592f74bb41de/merged major:0 minor:1223 fsType:overlay blockSize:0} overlay_0-1225:{mountpoint:/var/lib/containers/storage/overlay/c6079297d5a5e215f0e2de3d6ccdcf81cb9366f504fbd9b02185f1d0f531930c/merged major:0 minor:1225 fsType:overlay blockSize:0} overlay_0-1230:{mountpoint:/var/lib/containers/storage/overlay/128ca53d5113da801d2aed7a886ba7622c33344ab8364e34f1e7836dab91b572/merged major:0 minor:1230 fsType:overlay blockSize:0} overlay_0-1235:{mountpoint:/var/lib/containers/storage/overlay/b454331ae56748cd20f4a2a7aba57ef0a8853be92b692a58fcb89e6b7379328e/merged major:0 minor:1235 fsType:overlay blockSize:0} overlay_0-1237:{mountpoint:/var/lib/containers/storage/overlay/6a993840cd094773affee280d33323f7d251d2444fd5cc97157d96aa56db2809/merged major:0 minor:1237 fsType:overlay blockSize:0} overlay_0-130:{mountpoint:/var/lib/containers/storage/overlay/0fa8acbcd4c109a3a88bae9a31a51530836aba4a70311fc70c59d120045c3fce/merged major:0 minor:130 fsType:overlay blockSize:0} overlay_0-132:{mountpoint:/var/lib/containers/storage/overlay/e2c237d8aed33ac38d3c45ef5ae079d952eba20eea8d3a5093e07f997f92acb8/merged major:0 minor:132 fsType:overlay blockSize:0} overlay_0-134:{mountpoint:/var/lib/containers/storage/overlay/d9039efa2afa273f1fbc0bc8bc890e1b7fa5f4e326660e96287b45bc5f2da142/merged major:0 minor:134 fsType:overlay blockSize:0} overlay_0-139:{mountpoint:/var/lib/containers/storage/overlay/4efea80271bc0728f3f17a65ec711ca974a99f2c0c14d5295468915b5c717a97/merged major:0 minor:139 fsType:overlay blockSize:0} 
overlay_0-141:{mountpoint:/var/lib/containers/storage/overlay/203e77a7b17403fca9fdb098ca21cf827a03ebfb6c362ae3f28a7c3c88f12f76/merged major:0 minor:141 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/9b489f6c061bc46aa61bff12ca97aebbb9a02f49dd72d4b847c3f6dc7a4ac084/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/c21e9a8db214fd8ef2c468ded71f8c42703e0f08cfc3f6df1bbe8dec5ff4713c/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-155:{mountpoint:/var/lib/containers/storage/overlay/16659cbc0c302ac6ff4aaffa466b30df482a4fd19e73b6ce197e6fe8df53830e/merged major:0 minor:155 fsType:overlay blockSize:0} overlay_0-156:{mountpoint:/var/lib/containers/storage/overlay/6d5b3297e5ada99accdc4e1892f8693a309e81334d6b6caeb0200735a8a40b9c/merged major:0 minor:156 fsType:overlay blockSize:0} overlay_0-159:{mountpoint:/var/lib/containers/storage/overlay/b96213d3ceb81fa6960c350230c24cc2b358f3ee790d34de820d74a7524c528b/merged major:0 minor:159 fsType:overlay blockSize:0} overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/27de4b29cb6c80045f1c650a445775d66c5eb0441c7a898e11fc38cccadbeb5d/merged major:0 minor:170 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/377a8de062c0d25e4f8958f39df6266e0b414476ffb2a252382a7ba465a3d6ac/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/093430c757965110c5b283d8971c8fddf482d3b6b0afc4c844d55c840195af10/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/4e44f9c338319ce2e784616ab696c22924a1d0c6f1058cb8880792b0575e1385/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/e2c6f1dac9da5e4e145ea2bb5c601de1a380722ddfb1b2452b5b1bf11ceb8912/merged major:0 minor:189 fsType:overlay blockSize:0} 
overlay_0-199:{mountpoint:/var/lib/containers/storage/overlay/d08ca1767c03e84dda4fac9547dc09605f540d1877be3a3094288714e045dfb8/merged major:0 minor:199 fsType:overlay blockSize:0} overlay_0-200:{mountpoint:/var/lib/containers/storage/overlay/a3cf945a9ff0c485545bb6ee95c46759a21967c7986331c1d9a4e243e58fbcaa/merged major:0 minor:200 fsType:overlay blockSize:0} overlay_0-261:{mountpoint:/var/lib/containers/storage/overlay/3434a780a2bb82029b9bd63bce5b9b1734abe24605a918bae57731ae2bb4d3d8/merged major:0 minor:261 fsType:overlay blockSize:0} overlay_0-277:{mountpoint:/var/lib/containers/storage/overlay/69d771125edd4129e8f9be89ff23ae5fd6b9cd660e534f2a146027478ee2bcdf/merged major:0 minor:277 fsType:overlay blockSize:0} overlay_0-279:{mountpoint:/var/lib/containers/storage/overlay/301958ab817a6c5e6190cee7f5a5c46895a49398cff0c83c7cf479a46ed8da10/merged major:0 minor:279 fsType:overlay blockSize:0} overlay_0-281:{mountpoint:/var/lib/containers/storage/overlay/6fb40569e83c0e07966bad2311bced539e8a255ccb224ff61d850b63fbd9858b/merged major:0 minor:281 fsType:overlay blockSize:0} overlay_0-283:{mountpoint:/var/lib/containers/storage/overlay/abce63d8e805d33f68144e192704f4b7fcd37fd0c2d249cba811d26bc97446bf/merged major:0 minor:283 fsType:overlay blockSize:0} overlay_0-285:{mountpoint:/var/lib/containers/storage/overlay/f7e4d26dff6ff3f65902128e2a98e450cb4b2123a159e59ee8faf61606fc7336/merged major:0 minor:285 fsType:overlay blockSize:0} overlay_0-287:{mountpoint:/var/lib/containers/storage/overlay/f69102aa53ee5cbd6d4d5785743c9e7dc85277805dc072e812ba813a8c577033/merged major:0 minor:287 fsType:overlay blockSize:0} overlay_0-289:{mountpoint:/var/lib/containers/storage/overlay/5718e1f0c769597cd32253f880edc05f2943a6aede5e5ed60d24b23442d6bbe4/merged major:0 minor:289 fsType:overlay blockSize:0} overlay_0-291:{mountpoint:/var/lib/containers/storage/overlay/2a8a6ac1832281932c965ea9d6d8324c09bd365f0266f19fc890014c6320d676/merged major:0 minor:291 fsType:overlay blockSize:0} 
overlay_0-293:{mountpoint:/var/lib/containers/storage/overlay/8840f7c5c59fa72e0ba660c84d61456e42100f73cb1a4264514dd61e56f6fa66/merged major:0 minor:293 fsType:overlay blockSize:0} overlay_0-295:{mountpoint:/var/lib/containers/storage/overlay/57c3ffa344cc9d29258adcc64b8ddd95de1008fea4fd331d85022de56e4dec95/merged major:0 minor:295 fsType:overlay blockSize:0} overlay_0-297:{mountpoint:/var/lib/containers/storage/overlay/91711ef80ad25e413c6a6304ad487ce71652b46b839ed9c5e0828a30c452b03b/merged major:0 minor:297 fsType:overlay blockSize:0} overlay_0-299:{mountpoint:/var/lib/containers/storage/overlay/48a11d4dbfb229c8c14fdd613bdb330f4dc444d3cf63dc384efd91d252eb7775/merged major:0 minor:299 fsType:overlay blockSize:0} overlay_0-305:{mountpoint:/var/lib/containers/storage/overlay/20320ccfebb19e72c4b453f6b431bc4aabf0a450994031dd6145b429f2ead726/merged major:0 minor:305 fsType:overlay blockSize:0} overlay_0-310:{mountpoint:/var/lib/containers/storage/overlay/7431d6bfd8f5bb127002010982e4f57a5ebfc3680c838e5795fcb94f7bcc5153/merged major:0 minor:310 fsType:overlay blockSize:0} overlay_0-312:{mountpoint:/var/lib/containers/storage/overlay/3ed90baffed4c0eb203521712b200848ff80c3c6d39f16440f4e20ad636f4ba8/merged major:0 minor:312 fsType:overlay blockSize:0} overlay_0-327:{mountpoint:/var/lib/containers/storage/overlay/a285f0ae1f62985e3687051d9f6d47f3ed6f65b36b91c1fc0fc0eb0fa3719394/merged major:0 minor:327 fsType:overlay blockSize:0} overlay_0-330:{mountpoint:/var/lib/containers/storage/overlay/81d1ac5201bfe39be395a485d6abf4eb2d056ea036e98439e9471776bd193ca6/merged major:0 minor:330 fsType:overlay blockSize:0} overlay_0-333:{mountpoint:/var/lib/containers/storage/overlay/39a8e1a292605cafb36fd1be9d669227d15c3466eb80dd1d3c1d6a69e03650e3/merged major:0 minor:333 fsType:overlay blockSize:0} overlay_0-347:{mountpoint:/var/lib/containers/storage/overlay/2c3a4ad4dcf0f415d794c708cb269f087b8d67f1b5376a72f8648fce047ee21a/merged major:0 minor:347 fsType:overlay blockSize:0} 
overlay_0-357:{mountpoint:/var/lib/containers/storage/overlay/a5ba73cecfadb29d84724023106a968afce281cd1c072f394e553f56bb1dc1e0/merged major:0 minor:357 fsType:overlay blockSize:0} overlay_0-366:{mountpoint:/var/lib/containers/storage/overlay/49b03e72c02bd48fbc4cd16f77fffbe644fe5efcb6e70c820191e73e189e9142/merged major:0 minor:366 fsType:overlay blockSize:0} overlay_0-372:{mountpoint:/var/lib/containers/storage/overlay/d92365133220c82423a335d193e48cbfbb9d19220640c92a990cce054cd56266/merged major:0 minor:372 fsType:overlay blockSize:0} overlay_0-381:{mountpoint:/var/lib/containers/storage/overlay/b74a69d283ddbbce0a45800fd2f5b8159ddcc802e70c714ff54f7d8775d843a7/merged major:0 minor:381 fsType:overlay blockSize:0} overlay_0-387:{mountpoint:/var/lib/containers/storage/overlay/b0fa71ce4004282a7fa567f64ae9badf631e1ec4d13d2d1a4c534a69e1133131/merged major:0 minor:387 fsType:overlay blockSize:0} overlay_0-389:{mountpoint:/var/lib/containers/storage/overlay/d85253df0141f15e363cc459f1e192b40eadde2ba0d3247329c0f190ee7b3118/merged major:0 minor:389 fsType:overlay blockSize:0} overlay_0-391:{mountpoint:/var/lib/containers/storage/overlay/1936bde3969e81777c723708fa9c4b0479565278e08c42b948de88d954fffe71/merged major:0 minor:391 fsType:overlay blockSize:0} overlay_0-394:{mountpoint:/var/lib/containers/storage/overlay/ef666654a067a380ae817f260175092d22aab0654003ab7d35584ae49238a965/merged major:0 minor:394 fsType:overlay blockSize:0} overlay_0-395:{mountpoint:/var/lib/containers/storage/overlay/0a32863748a010eae30ec4375fad5e64766a039b32ffac4c8e221387bbe48677/merged major:0 minor:395 fsType:overlay blockSize:0} overlay_0-398:{mountpoint:/var/lib/containers/storage/overlay/2a2d9e7088a5672d06a5c6f39bb7686988c9403e606cb252916253e041f50492/merged major:0 minor:398 fsType:overlay blockSize:0} overlay_0-400:{mountpoint:/var/lib/containers/storage/overlay/3b64e149fccefe72cb477157f99575b0299d58474b88db3175626446d0190128/merged major:0 minor:400 fsType:overlay blockSize:0} 
overlay_0-403:{mountpoint:/var/lib/containers/storage/overlay/8f1611edb9cc942c8dc18531501375d36842c0b5777b8936f35f8846f6b66cd3/merged major:0 minor:403 fsType:overlay blockSize:0} overlay_0-405:{mountpoint:/var/lib/containers/storage/overlay/8271f8c433b5da77aebf5aa06c33e0a041c1bb92a18e2eeeb498cc271ad2a25f/merged major:0 minor:405 fsType:overlay blockSize:0} overlay_0-407:{mountpoint:/var/lib/containers/storage/overlay/06d2d667373f43d3d9743a1f567de1350bc3c09bfd8cbf606f4e512d351591bd/merged major:0 minor:407 fsType:overlay blockSize:0} overlay_0-411:{mountpoint:/var/lib/containers/storage/overlay/e0aa5ac26d79221cc22f762439d51c63c93ac8f03598c0b24908548748c39a35/merged major:0 minor:411 fsType:overlay blockSize:0} overlay_0-42:{mountpoint:/var/lib/containers/storage/overlay/839e085b30ba48430f046be053a0411aac5f671cd214069e07e67f01337770d7/merged major:0 minor:42 fsType:overlay blockSize:0} overlay_0-425:{mountpoint:/var/lib/containers/storage/overlay/02ff52cb8a82c562b479a4a4bf645dc98a29492b4828843dfec863af76e34936/merged major:0 minor:425 fsType:overlay blockSize:0} overlay_0-429:{mountpoint:/var/lib/containers/storage/overlay/3ca36055cd41394cc1a6838b00953a8e7493a2ac5e41b3191c0c03c14b2e3181/merged major:0 minor:429 fsType:overlay blockSize:0} overlay_0-434:{mountpoint:/var/lib/containers/storage/overlay/06ad104854fa7ee3e10c062ba0398191cdb1c927a6df281dfbb767a5c2bcd5a1/merged major:0 minor:434 fsType:overlay blockSize:0} overlay_0-436:{mountpoint:/var/lib/containers/storage/overlay/180ff789a007c70f57a72a9c9d3f45788b03aa6f210c12f75b0f9b7caaa207d1/merged major:0 minor:436 fsType:overlay blockSize:0} overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/23d391f94ceb16bee400a248795818b34bf2d8969cca0f38f9a40a8f060b7692/merged major:0 minor:44 fsType:overlay blockSize:0} overlay_0-442:{mountpoint:/var/lib/containers/storage/overlay/f5e69f2625a751896444e75825c04cd9d5cdc77360f9a17a8fc689306d58a2b0/merged major:0 minor:442 fsType:overlay blockSize:0} 
overlay_0-446:{mountpoint:/var/lib/containers/storage/overlay/c6510539f82184255adeb3630b95f9405baafce70dd31321dbf462e1ad5d9b62/merged major:0 minor:446 fsType:overlay blockSize:0} overlay_0-448:{mountpoint:/var/lib/containers/storage/overlay/f5ac552ba1048b7e95b23d9601c9d5f5ce350117bcdfffee67b353a95911e741/merged major:0 minor:448 fsType:overlay blockSize:0} overlay_0-450:{mountpoint:/var/lib/containers/storage/overlay/231e31e871216c4e106207352e72d3e5e2f7b3ff38773cbe571a6596adb76d26/merged major:0 minor:450 fsType:overlay blockSize:0} overlay_0-452:{mountpoint:/var/lib/containers/storage/overlay/04235afc9319ac672d3f1ed70aabdb0bbda3a99d5c8f3d0d3e745e5687efa011/merged major:0 minor:452 fsType:overlay blockSize:0} overlay_0-456:{mountpoint:/var/lib/containers/storage/overlay/587eb4427eee380c0db4bbe07392a88bee36567620fccc6e0ee3070e7491e6b7/merged major:0 minor:456 fsType:overlay blockSize:0} overlay_0-458:{mountpoint:/var/lib/containers/storage/overlay/20701019add378ff7de7ee9b6b675e46a709ae2d82a6365574decece23a5e0de/merged major:0 minor:458 fsType:overlay blockSize:0} overlay_0-46:{mountpoint:/var/lib/containers/storage/overlay/bc3f8736ba513ec6084da39871b4606bd2da3fc5b53b3e06a26db061011c1cc2/merged major:0 minor:46 fsType:overlay blockSize:0} overlay_0-461:{mountpoint:/var/lib/containers/storage/overlay/4886c83e6a682f70d82c3f7aaa578ef7c05d35a5824f177e91428a6acb39bd46/merged major:0 minor:461 fsType:overlay blockSize:0} overlay_0-462:{mountpoint:/var/lib/containers/storage/overlay/d08b953c66f25f62bc5536cdd7a41edf29359258b874219f7e698a6268c781d8/merged major:0 minor:462 fsType:overlay blockSize:0} overlay_0-471:{mountpoint:/var/lib/containers/storage/overlay/5fd196114bae528b189f6bdd2c5a0281253bb5b82ece25b885a8c64d7dfe5bbd/merged major:0 minor:471 fsType:overlay blockSize:0} overlay_0-475:{mountpoint:/var/lib/containers/storage/overlay/97f93abe643f84b559b7a9cbff5d46dadca1abc957a014d3e61fa8b910893d30/merged major:0 minor:475 fsType:overlay blockSize:0} 
overlay_0-481:{mountpoint:/var/lib/containers/storage/overlay/f71f08d5c6f9e3a226192b71d3bdca2c36eefd2e2eea85a0560e360d22cda73f/merged major:0 minor:481 fsType:overlay blockSize:0} overlay_0-488:{mountpoint:/var/lib/containers/storage/overlay/90901745d18c650c141940b0b5a7a840bf8a45f886783c67bed1577d15213d22/merged major:0 minor:488 fsType:overlay blockSize:0} overlay_0-496:{mountpoint:/var/lib/containers/storage/overlay/0f9f1cd8104e58037f75fb619fcfdcb3d6048360b0fff52724f65cb6c04e530f/merged major:0 minor:496 fsType:overlay blockSize:0} overlay_0-504:{mountpoint:/var/lib/containers/storage/overlay/d3013eedfd61cc8efc7a5d0eab59e0d3f6a46bf32994989a27e18e2e745c79de/merged major:0 minor:504 fsType:overlay blockSize:0} overlay_0-515:{mountpoint:/var/lib/containers/storage/overlay/bd5385c4669342db4353b823e247e7efd386346388f46c0bd60944e67deee981/merged major:0 minor:515 fsType:overlay blockSize:0} overlay_0-518:{mountpoint:/var/lib/containers/storage/overlay/c5d09c394ad180fb1c2703a73a5f988d2d20b00a08d189e741cf6e1c3aaeaf91/merged major:0 minor:518 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/145db2ef88cd3c072d348f18225e1bf1ccbb6a0c2ed1db75c37f38c7c4c5331b/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-521:{mountpoint:/var/lib/containers/storage/overlay/72bf28c8a4fbdddcd9a5202a747fe72b87f0d347fada9ddab1c9c17abf50cc62/merged major:0 minor:521 fsType:overlay blockSize:0} overlay_0-523:{mountpoint:/var/lib/containers/storage/overlay/84c65261f160868ad5b61e59ab89bfdc7792c3f27ac0928c993b9242a62c9f6a/merged major:0 minor:523 fsType:overlay blockSize:0} overlay_0-525:{mountpoint:/var/lib/containers/storage/overlay/6a52e930ea43d16538a9702d29b550874770cbe41345d87a2b5257e9ba1a2a5a/merged major:0 minor:525 fsType:overlay blockSize:0} overlay_0-527:{mountpoint:/var/lib/containers/storage/overlay/dab0bb0641a7220ca47881278d0d4701b548872e9e90399fce81b848de9a5710/merged major:0 minor:527 fsType:overlay blockSize:0} 
overlay_0-529:{mountpoint:/var/lib/containers/storage/overlay/8545d14640b75b424e767d0ae4765f18fc11bc027244fe4d095f8dfed663c78c/merged major:0 minor:529 fsType:overlay blockSize:0} overlay_0-532:{mountpoint:/var/lib/containers/storage/overlay/7e3bce03014dbee02abd435a3c3801035a4a855675261c9ff9b1ecb15e36e805/merged major:0 minor:532 fsType:overlay blockSize:0} overlay_0-542:{mountpoint:/var/lib/containers/storage/overlay/fb7ffdbecfe72199523dc6f3ecf09c29fb4b106eb1b3be0d1a99f1830b240f12/merged major:0 minor:542 fsType:overlay blockSize:0} overlay_0-547:{mountpoint:/var/lib/containers/storage/overlay/53a011095f85d6b4c8dc6111c48f47d4a7c5d7429dcbbeddb39a5cd7013dfb5c/merged major:0 minor:547 fsType:overlay blockSize:0} overlay_0-549:{mountpoint:/var/lib/containers/storage/overlay/e0e92d620cb5a6fc2556be45d3de7079c3fc0c4f5bc6e50fc04034d117f4fe30/merged major:0 minor:549 fsType:overlay blockSize:0} overlay_0-55:{mountpoint:/var/lib/containers/storage/overlay/3933858588af384b12b17b47f4dd61d712db26c4270f43029e108999e60825c7/merged major:0 minor:55 fsType:overlay blockSize:0} overlay_0-554:{mountpoint:/var/lib/containers/storage/overlay/b8a857af8d8cf3ca192da5d1b40551596ccd90d338ac42463d3997824311450e/merged major:0 minor:554 fsType:overlay blockSize:0} overlay_0-556:{mountpoint:/var/lib/containers/storage/overlay/b3e24c3a1efde603576d7db0bc2f3dd2ed4b8fc3f1ce19e8be1490911369bada/merged major:0 minor:556 fsType:overlay blockSize:0} overlay_0-558:{mountpoint:/var/lib/containers/storage/overlay/d23cf8c986b8ea9aca53ccdb5dfefa5e5b84417f5514895bd61bc90047c98df4/merged major:0 minor:558 fsType:overlay blockSize:0} overlay_0-568:{mountpoint:/var/lib/containers/storage/overlay/ece3791868d821725cf02241fff75bdf76c577d2fa66c16a7c5d0332f6f0fbae/merged major:0 minor:568 fsType:overlay blockSize:0} overlay_0-57:{mountpoint:/var/lib/containers/storage/overlay/29a5e454db004571c6a3e5880acfbb0c3a0e72c6fd84a929ca61455cf3ebe567/merged major:0 minor:57 fsType:overlay blockSize:0} 
overlay_0-570:{mountpoint:/var/lib/containers/storage/overlay/8580c386cf4228c8e2dfcbbd41ee929842733b10bf33b210336bcbda93c605fa/merged major:0 minor:570 fsType:overlay blockSize:0} overlay_0-59:{mountpoint:/var/lib/containers/storage/overlay/b5727faff65b21b8e88486bb897309c589fd4b4cbdd679084e2a591b79ce3829/merged major:0 minor:59 fsType:overlay blockSize:0} overlay_0-602:{mountpoint:/var/lib/containers/storage/overlay/a2769b1bfb57f4de3b4c33dee4732f0113c23e20dc4e443d7aa112aa9ccba660/merged major:0 minor:602 fsType:overlay blockSize:0} overlay_0-607:{mountpoint:/var/lib/containers/storage/overlay/73d3ae24657a2533f87441dbebdd917b496b32c235de07270443ed9cdca9c302/merged major:0 minor:607 fsType:overlay blockSize:0} overlay_0-609:{mountpoint:/var/lib/containers/storage/overlay/2826e66e33713a072d6761ea6434129aeff1f822cfc0fd74191c8f022730d8ff/merged major:0 minor:609 fsType:overlay blockSize:0} overlay_0-611:{mountpoint:/var/lib/containers/storage/overlay/2883d0d5a125dd5de61ea29fe299c2b3d52afddc08c9679eece1c441f223a73f/merged major:0 minor:611 fsType:overlay blockSize:0} overlay_0-615:{mountpoint:/var/lib/containers/storage/overlay/2927eb87f44bcfaf3e25280be344f5e496168731936c1e447969404a5bca1d5e/merged major:0 minor:615 fsType:overlay blockSize:0} overlay_0-617:{mountpoint:/var/lib/containers/storage/overlay/b4ff500c11f4206d6b8faecf58701df08bd7e6aa0a2364957eacc432fa0874c0/merged major:0 minor:617 fsType:overlay blockSize:0} overlay_0-619:{mountpoint:/var/lib/containers/storage/overlay/397d8da40696b07bcd80eac7d474e17fd9e474b61bd7123d301e4219ea35a341/merged major:0 minor:619 fsType:overlay blockSize:0} overlay_0-621:{mountpoint:/var/lib/containers/storage/overlay/f22bcb097d4de139128f45c3844246e22f7db7ddcb3eaa5cc084f3aadf217b57/merged major:0 minor:621 fsType:overlay blockSize:0} overlay_0-622:{mountpoint:/var/lib/containers/storage/overlay/2446acfb5da9ec9588bfaee17808114c27cb30d09bb04f5f972415dbf1bb472d/merged major:0 minor:622 fsType:overlay blockSize:0} 
overlay_0-624:{mountpoint:/var/lib/containers/storage/overlay/77c978acc7e00f2f6009dcf0c6cc04df3ec2d253f14bcd4c64e7b14a9152df14/merged major:0 minor:624 fsType:overlay blockSize:0} overlay_0-630:{mountpoint:/var/lib/containers/storage/overlay/d0bbe3b99e5b2ff8e718bf71022012bb3d0d1ffaa4e39f687b83d597c19a045e/merged major:0 minor:630 fsType:overlay blockSize:0} overlay_0-647:{mountpoint:/var/lib/containers/storage/overlay/db8fb284dcb25aee1b90a0c613690a659e1a3f6fe1678ddcd69cea33bbc13018/merged major:0 minor:647 fsType:overlay blockSize:0} overlay_0-654:{mountpoint:/var/lib/containers/storage/overlay/3aab5d40e4325852c743a4c5ad0aa6236e667d4db34495f5709b9137cd2c6ca2/merged major:0 minor:654 fsType:overlay blockSize:0} overlay_0-659:{mountpoint:/var/lib/containers/storage/overlay/e579dbc31871205a561e172935d73b49c4123f3ca196d91c40989194814d946f/merged major:0 minor:659 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/ac459cf207e1a1d07a828fd264e7d459556807e1ce7aafe4ca34e589f690fdbc/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-665:{mountpoint:/var/lib/containers/storage/overlay/779367ced6ebd60ed1b88064ee82fc6f1bbe926882e18a661b55b524fd61c4fb/merged major:0 minor:665 fsType:overlay blockSize:0} overlay_0-666:{mountpoint:/var/lib/containers/storage/overlay/b731e4e179b0b9735510a1ccc4bd2db1d223dbc33feeb700b189e004961cb0e4/merged major:0 minor:666 fsType:overlay blockSize:0} overlay_0-682:{mountpoint:/var/lib/containers/storage/overlay/5532930143daa20b430f0ba507ff91926ff544ebd9c873cda10a2a721367581a/merged major:0 minor:682 fsType:overlay blockSize:0} overlay_0-686:{mountpoint:/var/lib/containers/storage/overlay/b7b1cb05c3b46e13599521112ae83a49411bab20610e1cfe28e04993090ff835/merged major:0 minor:686 fsType:overlay blockSize:0} overlay_0-690:{mountpoint:/var/lib/containers/storage/overlay/1626e452a980078276258b335058164cf7634604f19ac52849eeaaf3cdeb2263/merged major:0 minor:690 fsType:overlay blockSize:0} 
overlay_0-70:{mountpoint:/var/lib/containers/storage/overlay/894d8e9925e6cbacaf9b6aeb28bedbd75c06ac85e2f8ca7b399847550e1d4054/merged major:0 minor:70 fsType:overlay blockSize:0} overlay_0-707:{mountpoint:/var/lib/containers/storage/overlay/05d3645e4970fcba2a9ee5f711d40d7b1b1b80f2cf8c16f5529034f65bda8703/merged major:0 minor:707 fsType:overlay blockSize:0} overlay_0-710:{mountpoint:/var/lib/containers/storage/overlay/5090f1b18c2614c8921b36eabbf0e64417c80cdd6b57ed68355264c297474a9a/merged major:0 minor:710 fsType:overlay blockSize:0} overlay_0-72:{mountpoint:/var/lib/containers/storage/overlay/75cbce20fd74c58b48e407265d5f0c208e9697b5b51b073af0c5716314e937a7/merged major:0 minor:72 fsType:overlay blockSize:0} overlay_0-722:{mountpoint:/var/lib/containers/storage/overlay/424a002f39baa6e12030ed6fc334362d0dcf1b7343e79d0bd1ef72cc6ab703c9/merged major:0 minor:722 fsType:overlay blockSize:0} overlay_0-726:{mountpoint:/var/lib/containers/storage/overlay/3913238ed46255ab92afbba7bbb6d4db01a0ab595387231553f614be1a6e771f/merged major:0 minor:726 fsType:overlay blockSize:0} overlay_0-733:{mountpoint:/var/lib/containers/storage/overlay/4f38c8dde81d144969c1cc6ed518f94fce7d9debfe6229bf0d222f67b24fc04d/merged major:0 minor:733 fsType:overlay blockSize:0} overlay_0-741:{mountpoint:/var/lib/containers/storage/overlay/1030f093e096e2f7f162ab37aac0887e6257b6fb63d9f055c0d9d72679d798d4/merged major:0 minor:741 fsType:overlay blockSize:0} overlay_0-757:{mountpoint:/var/lib/containers/storage/overlay/14df1ff8ea0037de2f3344ff9972738cf228678570eb06d2e91cd0c56ea20bde/merged major:0 minor:757 fsType:overlay blockSize:0} overlay_0-759:{mountpoint:/var/lib/containers/storage/overlay/ab778588874df1e32fae8b10909713c87243df297e4e47d89546c6b133c4801f/merged major:0 minor:759 fsType:overlay blockSize:0} overlay_0-76:{mountpoint:/var/lib/containers/storage/overlay/586fc4c4f768de13f93a984239036317061e245426aee7ef8d96292bbf5cd1e7/merged major:0 minor:76 fsType:overlay blockSize:0} 
overlay_0-766:{mountpoint:/var/lib/containers/storage/overlay/a61e1a0d42a6c5cc4e8f3dc51717334d29f396123bce83a8cd3ec436d1be720a/merged major:0 minor:766 fsType:overlay blockSize:0} overlay_0-771:{mountpoint:/var/lib/containers/storage/overlay/5087612945ac3d6c8d11edac93070e67051d8da2038eec514673e092b43e9396/merged major:0 minor:771 fsType:overlay blockSize:0} overlay_0-772:{mountpoint:/var/lib/containers/storage/overlay/2a4bbd7cff3a41ce10e1e653d0f63f406f83ba6a324dd82bac7e8028aa022250/merged major:0 minor:772 fsType:overlay blockSize:0} overlay_0-776:{mountpoint:/var/lib/containers/storage/overlay/4c07177fe33763bd584792637ffa186667014241902605ca95c2fb2d2d3bde8c/merged major:0 minor:776 fsType:overlay blockSize:0} overlay_0-778:{mountpoint:/var/lib/containers/storage/overlay/2fc2b09dd0399792365a69c46cc4803f3d0ea9c3220fd3936f4dbc3750c60560/merged major:0 minor:778 fsType:overlay blockSize:0} Mar 18 10:10:36.052230 master-0 kubenswrapper[30420]: overlay_0-781:{mountpoint:/var/lib/containers/storage/overlay/806cf4cc3d6ee7a941fa9aa8b6655cca0dc3add7adbb7a0797dbc8d7cb7fa06e/merged major:0 minor:781 fsType:overlay blockSize:0} overlay_0-783:{mountpoint:/var/lib/containers/storage/overlay/be8c7bce287461a9410f2d3e57c28981a2015a33e543ab12144d5e69f751fff8/merged major:0 minor:783 fsType:overlay blockSize:0} overlay_0-785:{mountpoint:/var/lib/containers/storage/overlay/425b5fbfd08c8087ce85437876b519a27a2b8c5fcf7f31f63dbae5dfcf499c73/merged major:0 minor:785 fsType:overlay blockSize:0} overlay_0-792:{mountpoint:/var/lib/containers/storage/overlay/d236d1a2ef2708af8279432c62ba15310807a15db98dd9031ff65749cee713ae/merged major:0 minor:792 fsType:overlay blockSize:0} overlay_0-804:{mountpoint:/var/lib/containers/storage/overlay/731c08f7fcdf1845aaf686dad332e85beaaccc9fb3155cd498ba29003035f56b/merged major:0 minor:804 fsType:overlay blockSize:0} overlay_0-828:{mountpoint:/var/lib/containers/storage/overlay/c75383ef47238014023c3397ceee3b2ee2ed3820c463eb5a949d36f7dc61ca8a/merged major:0 
minor:828 fsType:overlay blockSize:0} overlay_0-838:{mountpoint:/var/lib/containers/storage/overlay/4a71d045d200871e7a46d1aacdc1cb676bc6b4d389b9b6757fdab0dd16972552/merged major:0 minor:838 fsType:overlay blockSize:0} overlay_0-839:{mountpoint:/var/lib/containers/storage/overlay/ccdfc6a73fd3816590fac907f86598f9974d3cc93d0ab6d2cc7db6e398ba26c6/merged major:0 minor:839 fsType:overlay blockSize:0} overlay_0-843:{mountpoint:/var/lib/containers/storage/overlay/4faacf76f2fd00f9fc98749cb6dd2bd1c564bd8d25db6bce91bf03d9560fd284/merged major:0 minor:843 fsType:overlay blockSize:0} overlay_0-845:{mountpoint:/var/lib/containers/storage/overlay/85644e754bc475e1df20e1c6139092c80ceaa572d22d45426ce50ef23a0a2fc6/merged major:0 minor:845 fsType:overlay blockSize:0} overlay_0-850:{mountpoint:/var/lib/containers/storage/overlay/bd9267397668ba0edce54503e3268d1653e2b0401e6cad4db438ffad9b98b02f/merged major:0 minor:850 fsType:overlay blockSize:0} overlay_0-852:{mountpoint:/var/lib/containers/storage/overlay/009ceaaf4ff317c4df7f502405407f27a30d53365fd5f4cef66adf8591ec0007/merged major:0 minor:852 fsType:overlay blockSize:0} overlay_0-854:{mountpoint:/var/lib/containers/storage/overlay/da31f58b8a51443555f0627a87548f6bf33795187ffa8b400d2cb36a7b17defd/merged major:0 minor:854 fsType:overlay blockSize:0} overlay_0-858:{mountpoint:/var/lib/containers/storage/overlay/049dbfc68d39669a4a6987ad7a7ac8729c98e21f4f2b567e04935809bcfba18b/merged major:0 minor:858 fsType:overlay blockSize:0} overlay_0-86:{mountpoint:/var/lib/containers/storage/overlay/38bf128a1cfc59d005f40e5357250871a5dcb74c03cdbd253be7f6b578149615/merged major:0 minor:86 fsType:overlay blockSize:0} overlay_0-860:{mountpoint:/var/lib/containers/storage/overlay/86d3cd99388bace5b03012f6f5fd860774279293b42219bb32cfcfc02a5114a4/merged major:0 minor:860 fsType:overlay blockSize:0} overlay_0-862:{mountpoint:/var/lib/containers/storage/overlay/05bf9eb98384e61b38f28ed114aabe2a2e39694d098e8927921f75aaae9e9ba4/merged major:0 minor:862 
fsType:overlay blockSize:0} overlay_0-867:{mountpoint:/var/lib/containers/storage/overlay/c0a77f2df9cf4382dafe9e8ab09d0e6a435d74ac96fff6654e07e3bc800dfa73/merged major:0 minor:867 fsType:overlay blockSize:0} overlay_0-878:{mountpoint:/var/lib/containers/storage/overlay/e7f7c5a27d7989798c792c21b2b126e748d488aa471b3448a561571ddb6e3e1f/merged major:0 minor:878 fsType:overlay blockSize:0} overlay_0-88:{mountpoint:/var/lib/containers/storage/overlay/10f1a86d510c5577553ce39fd8ac863b22b7ced30986da46254c9e995c859a93/merged major:0 minor:88 fsType:overlay blockSize:0} overlay_0-880:{mountpoint:/var/lib/containers/storage/overlay/89eba39dc047b30c7574dd206053f007a96463e509354214eb992dc6729227c8/merged major:0 minor:880 fsType:overlay blockSize:0} overlay_0-890:{mountpoint:/var/lib/containers/storage/overlay/b75d97fb5afa96972cba999e0dc4321fa36e5e23b1abcbce1f12883ea3fdc150/merged major:0 minor:890 fsType:overlay blockSize:0} overlay_0-892:{mountpoint:/var/lib/containers/storage/overlay/39b2fefe97f2a20970778a9ab80e7ff8f543873fff0d4c4e07d06f72199d0172/merged major:0 minor:892 fsType:overlay blockSize:0} overlay_0-896:{mountpoint:/var/lib/containers/storage/overlay/1c213fe8f8c890ed59172c6a8df20a8e1dd9039e3d918359eef34166d75bcff9/merged major:0 minor:896 fsType:overlay blockSize:0} overlay_0-898:{mountpoint:/var/lib/containers/storage/overlay/9d21cedf11891fc294557fda9b107a93afefb77ca79dfc499488b6b30e6d0ea3/merged major:0 minor:898 fsType:overlay blockSize:0} overlay_0-915:{mountpoint:/var/lib/containers/storage/overlay/0bef411c9737aaca9b13c92dfae4ed457d5e0fc1a1006b2cb17e67328084d64c/merged major:0 minor:915 fsType:overlay blockSize:0} overlay_0-919:{mountpoint:/var/lib/containers/storage/overlay/48dcad131a56bb43dec2c257c28397ab5ffc2a7e9b898792f02ecad32a7faa0f/merged major:0 minor:919 fsType:overlay blockSize:0} overlay_0-928:{mountpoint:/var/lib/containers/storage/overlay/dc8eb39fec0227e73e8fbe61afb8f4d1843801720dea09d8437e970095a1c397/merged major:0 minor:928 fsType:overlay 
blockSize:0} overlay_0-93:{mountpoint:/var/lib/containers/storage/overlay/025f12f30c513c76bb57ee083bdfca6e2e90ab8309f29a45c3208bca259f6ba4/merged major:0 minor:93 fsType:overlay blockSize:0} overlay_0-935:{mountpoint:/var/lib/containers/storage/overlay/0983aa4d5e4bd8534fb2147bc286eedf51e2ddf4c17392f2aecf92ff45e56322/merged major:0 minor:935 fsType:overlay blockSize:0} overlay_0-939:{mountpoint:/var/lib/containers/storage/overlay/9a2c9648ba3857a6e5584767ac7068164a9945acd4c90961c50c0137c5c69067/merged major:0 minor:939 fsType:overlay blockSize:0} overlay_0-952:{mountpoint:/var/lib/containers/storage/overlay/0611e70a5b7fc7dba69eb943fad28e53d7dae8a7acd70d92d7e3deb0921a0ef6/merged major:0 minor:952 fsType:overlay blockSize:0} overlay_0-955:{mountpoint:/var/lib/containers/storage/overlay/4eb076869c5a59d7f4a8525041d7865307286c87fe7b7dac22147d8c04a5b61e/merged major:0 minor:955 fsType:overlay blockSize:0} overlay_0-958:{mountpoint:/var/lib/containers/storage/overlay/1c3c0c1313f58774a7dafd75f7963c8e93c0d88e69717f8c66775c9a6f69e660/merged major:0 minor:958 fsType:overlay blockSize:0} overlay_0-96:{mountpoint:/var/lib/containers/storage/overlay/6c0f75a5691afaf8d5951a5396f8420c731e5d1f63d0105bfc932f719a6547cc/merged major:0 minor:96 fsType:overlay blockSize:0} overlay_0-961:{mountpoint:/var/lib/containers/storage/overlay/5edf28e68415edd770cdf71bf8750fc71d7e38d89ea31be98d47270da669e477/merged major:0 minor:961 fsType:overlay blockSize:0} overlay_0-969:{mountpoint:/var/lib/containers/storage/overlay/e23eb0d467c49315c3299c58c1425c6deac3f0fbc4dd60cf59c7b0773d58b484/merged major:0 minor:969 fsType:overlay blockSize:0} overlay_0-988:{mountpoint:/var/lib/containers/storage/overlay/19d17627938b29b318a55382c1bf84ce1a2f11ee593bed3e7333cda3a356b0ef/merged major:0 minor:988 fsType:overlay blockSize:0}] Mar 18 10:10:36.090013 master-0 kubenswrapper[30420]: I0318 10:10:36.085242 30420 manager.go:217] Machine: {Timestamp:2026-03-18 10:10:36.084515929 +0000 UTC m=+0.137261868 
CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:2ce24ad926944999b07b278206f0e4a4 SystemUUID:2ce24ad9-2694-4999-b07b-278206f0e4a4 BootID:b58383dd-cfef-45af-ac7b-26a609b46986 Filesystems:[{Device:overlay_0-446 DeviceMajor:0 DeviceMinor:446 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/91331360-dc70-45bb-a815-e00664bae6c4/volumes/kubernetes.io~projected/kube-api-access-8w8sl DeviceMajor:0 DeviceMinor:118 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-532 DeviceMajor:0 DeviceMinor:532 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-547 DeviceMajor:0 DeviceMinor:547 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-621 DeviceMajor:0 DeviceMinor:621 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c2635254-a491-42e5-b598-461c24bf77ca/volumes/kubernetes.io~projected/kube-api-access-p4hfd DeviceMajor:0 DeviceMinor:245 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/613533c3a19224e9e30dba35639ecd39810b8db2f7864917803baa176a7bbed0/userdata/shm DeviceMajor:0 DeviceMinor:265 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-381 DeviceMajor:0 DeviceMinor:381 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-448 DeviceMajor:0 DeviceMinor:448 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/71755097-7543-48f8-8925-0e21650bf8f6/volumes/kubernetes.io~projected/kube-api-access-qvhfc DeviceMajor:0 DeviceMinor:824 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:overlay_0-609 DeviceMajor:0 DeviceMinor:609 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-113 DeviceMajor:0 DeviceMinor:113 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-112 DeviceMajor:0 DeviceMinor:112 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-558 DeviceMajor:0 DeviceMinor:558 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a8685da7c022ead7819bc14f1d28e93a2c0d8bd27bb5dc325c78a31a740e3f59/userdata/shm DeviceMajor:0 DeviceMinor:1209 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-372 DeviceMajor:0 DeviceMinor:372 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-72 DeviceMajor:0 DeviceMinor:72 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0ab9786ebf50a65e9432d654c3f52392db8e881a65fb26e7e3e002f1d0577eeb/userdata/shm DeviceMajor:0 DeviceMinor:354 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-434 DeviceMajor:0 DeviceMinor:434 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/74795f5d-dcd7-4723-8931-c34b59ce3087/volumes/kubernetes.io~projected/kube-api-access-8rzsk DeviceMajor:0 DeviceMinor:303 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-387 DeviceMajor:0 DeviceMinor:387 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-521 DeviceMajor:0 DeviceMinor:521 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/29fbc78b-1887-40d4-8165-f0f7cc40b583/volumes/kubernetes.io~secret/machine-api-operator-tls DeviceMajor:0 DeviceMinor:815 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/009c5d81632f0f08c8ed08e157decfc8eae7a1397849b1d1bb183a9c0b36e696/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/5900a401-21c2-47f0-a921-47c648da558d/volumes/kubernetes.io~secret/kube-state-metrics-tls DeviceMajor:0 DeviceMinor:1105 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-741 DeviceMajor:0 DeviceMinor:741 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/187a5eb02f6d39f4d5d17d569f5578af7e87c01c9503e828b0f618e0f62581eb/userdata/shm DeviceMajor:0 DeviceMinor:320 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/5900a401-21c2-47f0-a921-47c648da558d/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1096 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-305 DeviceMajor:0 DeviceMinor:305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-101 DeviceMajor:0 DeviceMinor:101 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6f266bad-8b30-4300-ad93-9d48e61f2440/volumes/kubernetes.io~projected/kube-api-access-shbrj DeviceMajor:0 DeviceMinor:241 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-475 DeviceMajor:0 DeviceMinor:475 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d4d2218c-f9df-4d43-8727-ed3a920e23f7/volumes/kubernetes.io~secret/package-server-manager-serving-cert DeviceMajor:0 DeviceMinor:583 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/a62338b3d8b6fefea0ba1a5636a4c5079225838e71c631e7514905926d40be01/userdata/shm DeviceMajor:0 DeviceMinor:820 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/ee376320-9ca0-444d-ab37-9cbcb6729b11/volumes/kubernetes.io~projected/kube-api-access-25k9g DeviceMajor:0 DeviceMinor:225 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/ec53d7fa-445b-4e1d-84ef-545f08e80ccc/volumes/kubernetes.io~projected/kube-api-access-wj9sq DeviceMajor:0 DeviceMinor:247 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1127 DeviceMajor:0 DeviceMinor:1127 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-70 DeviceMajor:0 DeviceMinor:70 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9ccdc221-4ec5-487e-8ec4-85284ed628d8/volumes/kubernetes.io~projected/kube-api-access-ghd2r DeviceMajor:0 DeviceMinor:92 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-890 DeviceMajor:0 DeviceMinor:890 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1082 DeviceMajor:0 DeviceMinor:1082 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b7ca349d109c7ce47be51e023fb21ab1709798444b4c309eab6316772a1ee596/userdata/shm DeviceMajor:0 DeviceMinor:1221 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/94d378b5868ac49c0d516b9285e21a09fb0d6dca212ba5b79072685e6b662578/userdata/shm DeviceMajor:0 DeviceMinor:355 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/106fc2a2-9e7b-4f86-94b8-b1a1906646d8/volumes/kubernetes.io~secret/secret-metrics-client-certs 
DeviceMajor:0 DeviceMinor:1155 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/aa4cba67-b5d4-46c2-8cad-1a1379f764cb/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1205 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-690 DeviceMajor:0 DeviceMinor:690 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-862 DeviceMajor:0 DeviceMinor:862 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/106fc2a2-9e7b-4f86-94b8-b1a1906646d8/volumes/kubernetes.io~secret/secret-metrics-server-tls DeviceMajor:0 DeviceMinor:1160 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b588169f9714563a6db5379251857ae747425b95554009dbd48c296b2e82b297/userdata/shm DeviceMajor:0 DeviceMinor:360 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-155 DeviceMajor:0 DeviceMinor:155 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8b906fc0-f2bf-4586-97e6-921bbd467b65/volumes/kubernetes.io~projected/kube-api-access-rw4s4 DeviceMajor:0 DeviceMinor:500 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/0c7b317c-d141-4e69-9c82-4a5dda6c3248/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:507 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1043 DeviceMajor:0 DeviceMinor:1043 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dda9475997ae063330eb66def313ccd5f6f56fc68307fe940171e35bbbb378fc/userdata/shm DeviceMajor:0 DeviceMinor:1068 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/f875878f-3588-42f1-9488-750d9f4582f8/volumes/kubernetes.io~secret/webhook-certs DeviceMajor:0 DeviceMinor:1215 
Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-450 DeviceMajor:0 DeviceMinor:450 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-357 DeviceMajor:0 DeviceMinor:357 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-792 DeviceMajor:0 DeviceMinor:792 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-55 DeviceMajor:0 DeviceMinor:55 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/06e9465576405f83c2377274bbe7c9f80c7e1d2afadf9ee173551a2f7f95d786/userdata/shm DeviceMajor:0 DeviceMinor:79 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/b0f77d68-f228-4f82-befb-fb2a2ce2e976/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:544 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/accc57fb-75f5-4f89-9804-6ede7f77e27c/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:453 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/8b906fc0-f2bf-4586-97e6-921bbd467b65/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:498 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/ee376320-9ca0-444d-ab37-9cbcb6729b11/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:584 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1005 DeviceMajor:0 DeviceMinor:1005 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1110 DeviceMajor:0 DeviceMinor:1110 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-783 DeviceMajor:0 DeviceMinor:783 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-398 DeviceMajor:0 DeviceMinor:398 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:/run/containers/storage/overlay-containers/a0f6a23031d96231e99cbb9f2b16dea4d913c0ee0df84104c4f8c08579a04daa/userdata/shm DeviceMajor:0 DeviceMinor:273 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9c1ce07b6c7993e6988dcb73b0d0ae149fc17c7c6fa96dc548353a31db24514c/userdata/shm DeviceMajor:0 DeviceMinor:438 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/5900a401-21c2-47f0-a921-47c648da558d/volumes/kubernetes.io~projected/kube-api-access-qtnxf DeviceMajor:0 DeviceMinor:1099 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-366 DeviceMajor:0 DeviceMinor:366 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-132 DeviceMajor:0 DeviceMinor:132 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/43d54514-989c-4c82-93f9-153b44eacdd1/volumes/kubernetes.io~secret/stats-auth DeviceMajor:0 DeviceMinor:1015 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/15afbeaf2b91c3dde6de78ecc76cf185217127e7fd54f971970a9dc91ec72267/userdata/shm DeviceMajor:0 DeviceMinor:1080 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-96 DeviceMajor:0 DeviceMinor:96 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-682 DeviceMajor:0 DeviceMinor:682 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/accc57fb-75f5-4f89-9804-6ede7f77e27c/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:227 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-654 DeviceMajor:0 DeviceMinor:654 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-395 DeviceMajor:0 DeviceMinor:395 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/8cb5158f-2199-42c0-995a-8490c9ec8a95/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:433 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/6f266bad-8b30-4300-ad93-9d48e61f2440/volumes/kubernetes.io~secret/marketplace-operator-metrics DeviceMajor:0 DeviceMinor:587 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-838 DeviceMajor:0 DeviceMinor:838 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0999f781-3299-4cb6-ba76-2a4f4584c685/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:216 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/8b906fc0-f2bf-4586-97e6-921bbd467b65/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:497 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-523 DeviceMajor:0 DeviceMinor:523 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-86 DeviceMajor:0 DeviceMinor:86 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f076eaf0-b041-4db0-ba06-3d85e23bb654/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:222 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-458 DeviceMajor:0 DeviceMinor:458 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-602 DeviceMajor:0 DeviceMinor:602 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-852 DeviceMajor:0 DeviceMinor:852 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9cfd2323-c33a-4d80-9c25-710920c0e605/volumes/kubernetes.io~secret/prometheus-operator-tls DeviceMajor:0 DeviceMinor:1078 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-452 DeviceMajor:0 DeviceMinor:452 Capacity:214143315968 
Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:226 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-804 DeviceMajor:0 DeviceMinor:804 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/aa4cba67-b5d4-46c2-8cad-1a1379f764cb/volumes/kubernetes.io~projected/kube-api-access-sxf74 DeviceMajor:0 DeviceMinor:1208 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-347 DeviceMajor:0 DeviceMinor:347 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-710 DeviceMajor:0 DeviceMinor:710 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0a709f6a031857e3e4e56dda2c8a6cf2ebbad7bd036491c8c8d4d7ae887efd7b/userdata/shm DeviceMajor:0 DeviceMinor:1102 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/9fc664ff-2e8f-441d-82dc-8f21c1d362d7/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:342 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-283 DeviceMajor:0 DeviceMinor:283 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-299 DeviceMajor:0 DeviceMinor:299 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-624 DeviceMajor:0 DeviceMinor:624 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ce65f61f-8e3a-47d5-ac12-ad4ab05d2850/volumes/kubernetes.io~secret/samples-operator-tls DeviceMajor:0 DeviceMinor:794 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-287 DeviceMajor:0 DeviceMinor:287 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-1040 DeviceMajor:0 DeviceMinor:1040 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1133 DeviceMajor:0 DeviceMinor:1133 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-722 DeviceMajor:0 DeviceMinor:722 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:789 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/274e9b834559b126c9207a26c34fb18f9b1812e69065a033951f8808dc379847/userdata/shm DeviceMajor:0 DeviceMinor:437 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:979 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1049 DeviceMajor:0 DeviceMinor:1049 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-279 DeviceMajor:0 DeviceMinor:279 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-159 DeviceMajor:0 DeviceMinor:159 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d4d2218c-f9df-4d43-8727-ed3a920e23f7/volumes/kubernetes.io~projected/kube-api-access-w4qp9 DeviceMajor:0 DeviceMinor:234 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-297 DeviceMajor:0 DeviceMinor:297 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:overlay_0-845 DeviceMajor:0 DeviceMinor:845 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-858 DeviceMajor:0 DeviceMinor:858 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e5e0836f-c0b4-40cd-9f63-55774da2740e/volumes/kubernetes.io~projected/kube-api-access-k94j4 DeviceMajor:0 DeviceMinor:912 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-915 DeviceMajor:0 DeviceMinor:915 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/43d54514-989c-4c82-93f9-153b44eacdd1/volumes/kubernetes.io~projected/kube-api-access-z459j DeviceMajor:0 DeviceMinor:1025 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/1cb8ab19-0564-4182-a7e3-0943c1480663/volumes/kubernetes.io~secret/node-exporter-tls DeviceMajor:0 DeviceMinor:1104 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1237 DeviceMajor:0 DeviceMinor:1237 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-607 DeviceMajor:0 DeviceMinor:607 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a3657106-1eea-4031-8c92-85ba6287b425/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:731 Capacity:200003584 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cfbca75437300b7b872afd6b8a0f67b07ac16a2585e869d44683d0377dfcaeaa/userdata/shm DeviceMajor:0 DeviceMinor:137 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/8ee99294-4785-49d0-b493-0d734cf09396/volumes/kubernetes.io~secret/image-registry-operator-tls DeviceMajor:0 DeviceMinor:435 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2108f9b19bef72325cf7ce6838f94c4d93335d1acb2849349c2da5bf81571c7d/userdata/shm DeviceMajor:0 DeviceMinor:588 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/var/lib/kubelet/pods/caec44dc-aab7-4407-b34a-52bbe4b4f635/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert DeviceMajor:0 DeviceMinor:796 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/308f045ad48f29df3fbed5a202a7ccbbb9fcab711591e6a10e9dfffd40505d42/userdata/shm DeviceMajor:0 DeviceMinor:802 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/74476be5-669a-4737-b93b-c4870423a4da/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:1022 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-935 DeviceMajor:0 DeviceMinor:935 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1211 DeviceMajor:0 DeviceMinor:1211 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1230 DeviceMajor:0 DeviceMinor:1230 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-525 DeviceMajor:0 DeviceMinor:525 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9cfd2323-c33a-4d80-9c25-710920c0e605/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1074 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-46 DeviceMajor:0 DeviceMinor:46 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-330 DeviceMajor:0 DeviceMinor:330 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-611 DeviceMajor:0 DeviceMinor:611 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/481a20c56b1513a6550470d25ece05987dc0ad3be0f23f19f26b6d5a7a36ce42/userdata/shm DeviceMajor:0 DeviceMinor:830 Capacity:67108864 Type:vfs Inodes:4108169 
HasInodes:true} {Device:/var/lib/kubelet/pods/accc57fb-75f5-4f89-9804-6ede7f77e27c/volumes/kubernetes.io~projected/kube-api-access-nwfph DeviceMajor:0 DeviceMinor:242 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-456 DeviceMajor:0 DeviceMinor:456 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0c7b317c-d141-4e69-9c82-4a5dda6c3248/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:505 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/432f611b-a1a2-4cc9-b005-17a16413d281/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:658 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/9f5c64aa-676e-4e48-b714-02f6edb1d361/volumes/kubernetes.io~projected/kube-api-access-xttqt DeviceMajor:0 DeviceMinor:813 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7483df25713a00b0ea8cbc4c6314a73f83bff54b160af6b49103c48fec6f8b1e/userdata/shm DeviceMajor:0 DeviceMinor:864 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/1ad4aa30-f7d5-47ca-b01e-2643f7195685/volumes/kubernetes.io~projected/kube-api-access-fp8vt DeviceMajor:0 DeviceMinor:799 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-854 DeviceMajor:0 DeviceMinor:854 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5f264524ff7942903d23e39e84e002c2a4f349e860595476e5954b840e22c114/userdata/shm DeviceMajor:0 DeviceMinor:1124 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1129 DeviceMajor:0 DeviceMinor:1129 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-759 DeviceMajor:0 
DeviceMinor:759 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1a4ce30442f41beafbbdf0d0fcad6e463a305b377720e6060de4d2e923ec7031/userdata/shm DeviceMajor:0 DeviceMinor:1030 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/aaadd000-4db7-4264-bfc1-b0ad63c8fb05/volumes/kubernetes.io~projected/kube-api-access-v4qbs DeviceMajor:0 DeviceMinor:1023 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/5ea90fee-5b5e-4b59-bfc4-969ee8c7912e/volumes/kubernetes.io~projected/kube-api-access-b46jq DeviceMajor:0 DeviceMinor:390 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/106fc2a2-9e7b-4f86-94b8-b1a1906646d8/volumes/kubernetes.io~secret/client-ca-bundle DeviceMajor:0 DeviceMinor:1159 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/03355a5e2caa4496c4b10efd4243dd60c302d54b340a80972ebe3e5661f0dd6b/userdata/shm DeviceMajor:0 DeviceMinor:349 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0/volumes/kubernetes.io~projected/kube-api-access-gmxj9 DeviceMajor:0 DeviceMinor:1002 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-281 DeviceMajor:0 DeviceMinor:281 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-411 DeviceMajor:0 DeviceMinor:411 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/29fbc78b-1887-40d4-8165-f0f7cc40b583/volumes/kubernetes.io~projected/kube-api-access-vm2nt DeviceMajor:0 DeviceMinor:819 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dc23eb8c4f8df6172dfca6b7df2e710cff8ef0d5f4a2b6bc29af4b8dd83114fe/userdata/shm DeviceMajor:0 DeviceMinor:832 Capacity:67108864 Type:vfs 
Inodes:4108169 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-429 DeviceMajor:0 DeviceMinor:429 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e702a6208830f572cc3b5f2ed7735679946a02e12d549d40a5020b7820cc5f46/userdata/shm DeviceMajor:0 DeviceMinor:831 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-515 DeviceMajor:0 DeviceMinor:515 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-776 DeviceMajor:0 DeviceMinor:776 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/196e7607-1ddf-467b-9901-b4be746130a1/volumes/kubernetes.io~secret/node-bootstrap-token DeviceMajor:0 DeviceMinor:1058 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1072 DeviceMajor:0 DeviceMinor:1072 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/af1bbeee-1faf-43d1-943f-ee5319cef4e9/volumes/kubernetes.io~projected/kube-api-access-nkvcs DeviceMajor:0 DeviceMinor:1101 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-400 DeviceMajor:0 DeviceMinor:400 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/62b82d72-d73c-451a-84e1-551d73036aa8/volumes/kubernetes.io~projected/kube-api-access-lvnrf DeviceMajor:0 DeviceMinor:243 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1ddd0ca0bee2bbed601ee28c1df5999ea68981b20d1c0067b52437a2649e11aa/userdata/shm DeviceMajor:0 DeviceMinor:454 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/8b906fc0-f2bf-4586-97e6-921bbd467b65/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:499 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/var/lib/kubelet/pods/f88c2a18-11f5-45ef-aff1-3c5976716d85/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls DeviceMajor:0 DeviceMinor:809 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-134 DeviceMajor:0 DeviceMinor:134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-568 DeviceMajor:0 DeviceMinor:568 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1084562a-20a0-432d-b739-90bc0a4daff2/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:814 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-988 DeviceMajor:0 DeviceMinor:988 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1166 DeviceMajor:0 DeviceMinor:1166 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c2635254-a491-42e5-b598-461c24bf77ca/volumes/kubernetes.io~secret/node-tuning-operator-tls DeviceMajor:0 DeviceMinor:431 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-617 DeviceMajor:0 DeviceMinor:617 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-518 DeviceMajor:0 DeviceMinor:518 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-481 DeviceMajor:0 DeviceMinor:481 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a078565a-6970-4f42-84f4-938f1d637245/volumes/kubernetes.io~projected/kube-api-access-cxv6v DeviceMajor:0 DeviceMinor:228 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/0945a421-d7c4-46df-b3d9-507443627d51/volumes/kubernetes.io~projected/kube-api-access-k29kr DeviceMajor:0 DeviceMinor:339 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1235 DeviceMajor:0 DeviceMinor:1235 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4/volumes/kubernetes.io~projected/kube-api-access-p5dk8 DeviceMajor:0 DeviceMinor:233 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/13ead1a9d130e4cdb9a3e1038d5bbe3813860bfedd951bc71fd7108de36c6c88/userdata/shm DeviceMajor:0 DeviceMinor:444 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1d7c06dbc8e2f887f2a21bc3e179a21693ddc1835812120917fd3ac94d4f0ff2/userdata/shm DeviceMajor:0 DeviceMinor:589 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9fee5c93850116cedccb29b440cbb9d64b2e4cc6c4a2b7baa36f936fc07adce9/userdata/shm DeviceMajor:0 DeviceMinor:364 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/bb35841e-d992-4044-aaaa-06c9faf47bd0/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:221 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:206 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:214 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ee46779ae89b4ca2573c0db3f08f40bcd1f36bd939f6b097aaa8ab0676c68690/userdata/shm DeviceMajor:0 DeviceMinor:250 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-529 DeviceMajor:0 DeviceMinor:529 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-542 DeviceMajor:0 DeviceMinor:542 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-1164 DeviceMajor:0 DeviceMinor:1164 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5f76a44720ca3021debcea94492e12e9442fdb9f8fbe338ee1965217a14109bd/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/d26036f1-bdce-4ec5-873f-962fa7e8e6c1/volumes/kubernetes.io~projected/kube-api-access-lhzg4 DeviceMajor:0 DeviceMinor:240 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/543fb2147aca575376ed7bd211cfca3f8a0e31f62df5e58bf47f4f7fc11fc303/userdata/shm DeviceMajor:0 DeviceMinor:268 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/adda5560398a1e9cd1248ce8d3ae8608ee224ce0ee349c65f7682b313879aa78/userdata/shm DeviceMajor:0 DeviceMinor:1162 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a70d40880058e84142e4d02963e7aba37e4a753a42ab982dbb781aba6c1199ec/userdata/shm DeviceMajor:0 DeviceMinor:825 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1038 DeviceMajor:0 DeviceMinor:1038 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a/volumes/kubernetes.io~secret/catalogserver-certs DeviceMajor:0 DeviceMinor:572 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-312 DeviceMajor:0 DeviceMinor:312 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b0f77d68-f228-4f82-befb-fb2a2ce2e976/volumes/kubernetes.io~projected/kube-api-access-t77j8 DeviceMajor:0 DeviceMinor:567 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1032 DeviceMajor:0 DeviceMinor:1032 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:/run/containers/storage/overlay-containers/dd0e307b59dcdef36339f9469bcea9ae60dc835b43a1e8b7190883e66520e662/userdata/shm DeviceMajor:0 DeviceMinor:384 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-200 DeviceMajor:0 DeviceMinor:200 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-407 DeviceMajor:0 DeviceMinor:407 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f69a00b6-d908-4485-bb0d-57594fc01d24/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls DeviceMajor:0 DeviceMinor:585 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1086 DeviceMajor:0 DeviceMinor:1086 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-333 DeviceMajor:0 DeviceMinor:333 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-436 DeviceMajor:0 DeviceMinor:436 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/61f6b81b92e4d6e8441e143173fb9e75d890f0b6176d5db04fc0f47c9e7e489a/userdata/shm DeviceMajor:0 DeviceMinor:591 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1b4d46c0a582fa8416fadc519a245d9a05f81263579189dfddab63cae5612499/userdata/shm DeviceMajor:0 DeviceMinor:514 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-403 DeviceMajor:0 DeviceMinor:403 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-785 DeviceMajor:0 DeviceMinor:785 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1084 DeviceMajor:0 DeviceMinor:1084 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a4b6c9bb5e1aa6ddb46f2ece42f31a363d888ffb22d8e2d50941005d7a91173e/userdata/shm DeviceMajor:0 DeviceMinor:806 Capacity:67108864 
Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1131 DeviceMajor:0 DeviceMinor:1131 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-488 DeviceMajor:0 DeviceMinor:488 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-42 DeviceMajor:0 DeviceMinor:42 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/00431ec658bea7a97a4c1df198c67f87ad4685fb77cc89ae90150ff213743316/userdata/shm DeviceMajor:0 DeviceMinor:578 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/6a6a616d-012a-479e-ab3d-b21295ea1805/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:210 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/a078565a-6970-4f42-84f4-938f1d637245/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:217 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/84a3629f241ccd15c8649ba629b3be31e2785a3b2224bbe09e95e6dbad4b5613/userdata/shm DeviceMajor:0 DeviceMinor:267 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/6a6a616d-012a-479e-ab3d-b21295ea1805/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:231 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-389 DeviceMajor:0 DeviceMinor:389 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/aa4cba67-b5d4-46c2-8cad-1a1379f764cb/volumes/kubernetes.io~secret/telemeter-client-tls DeviceMajor:0 DeviceMinor:1206 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1177 DeviceMajor:0 DeviceMinor:1177 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-156 DeviceMajor:0 DeviceMinor:156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/1cb8ab19-0564-4182-a7e3-0943c1480663/volumes/kubernetes.io~projected/kube-api-access-4v8jq DeviceMajor:0 DeviceMinor:1100 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-76 DeviceMajor:0 DeviceMinor:76 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f/volumes/kubernetes.io~projected/kube-api-access-d89r9 DeviceMajor:0 DeviceMinor:800 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-880 DeviceMajor:0 DeviceMinor:880 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/03de1ea6-da57-4e13-8e5a-d5e10a9f9957/volumes/kubernetes.io~projected/kube-api-access-hcj8f DeviceMajor:0 DeviceMinor:105 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/b0f77d68-f228-4f82-befb-fb2a2ce2e976/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:566 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-442 DeviceMajor:0 DeviceMinor:442 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/29490aed-9c97-42d1-94c8-44d1de13b70c/volumes/kubernetes.io~projected/kube-api-access-257hk DeviceMajor:0 DeviceMinor:811 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-781 DeviceMajor:0 DeviceMinor:781 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-289 DeviceMajor:0 DeviceMinor:289 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8126b78e-d1e4-4de7-a71d-ebc9fa0afdae/volumes/kubernetes.io~projected/kube-api-access-hww8g DeviceMajor:0 DeviceMinor:378 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-686 DeviceMajor:0 DeviceMinor:686 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/29490aed-9c97-42d1-94c8-44d1de13b70c/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert DeviceMajor:0 DeviceMinor:797 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-839 DeviceMajor:0 DeviceMinor:839 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bdf80ddc-7c99-4f60-814b-ba98809ef41d/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:865 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/106fc2a2-9e7b-4f86-94b8-b1a1906646d8/volumes/kubernetes.io~projected/kube-api-access-fqx6m DeviceMajor:0 DeviceMinor:1161 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28/volumes/kubernetes.io~projected/kube-api-access-gmffc DeviceMajor:0 DeviceMinor:136 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0d84a97391b20bbc1473efdc91b70735c4232a35d2754651bb0243ebf80ab3be/userdata/shm DeviceMajor:0 DeviceMinor:270 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/bdf80ddc-7c99-4f60-814b-ba98809ef41d/volumes/kubernetes.io~projected/kube-api-access-bql7p DeviceMajor:0 DeviceMinor:834 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-665 DeviceMajor:0 DeviceMinor:665 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/04e5c67b5ae79340d56b2dfa469c98052472e299f29df84c3890635b1574d4c0/userdata/shm DeviceMajor:0 DeviceMinor:263 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/aa4cba67-b5d4-46c2-8cad-1a1379f764cb/volumes/kubernetes.io~secret/federate-client-tls DeviceMajor:0 DeviceMinor:1207 Capacity:32475525120 Type:vfs Inodes:4108169 
HasInodes:true} {Device:/var/lib/kubelet/pods/aa4cba67-b5d4-46c2-8cad-1a1379f764cb/volumes/kubernetes.io~secret/secret-telemeter-client DeviceMajor:0 DeviceMinor:1204 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/caec44dc-aab7-4407-b34a-52bbe4b4f635/volumes/kubernetes.io~projected/kube-api-access-xml27 DeviceMajor:0 DeviceMinor:801 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/1084562a-20a0-432d-b739-90bc0a4daff2/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls DeviceMajor:0 DeviceMinor:816 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e277fb0b84dd045eb44f5a8337ca7f75f6577ad5f14ee5eacb1c176f0cf83dfa/userdata/shm DeviceMajor:0 DeviceMinor:913 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-425 DeviceMajor:0 DeviceMinor:425 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-961 DeviceMajor:0 DeviceMinor:961 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0442ec6c-5973-40a5-a0c3-dc02de46d343/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:581 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-293 DeviceMajor:0 DeviceMinor:293 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/43d54514-989c-4c82-93f9-153b44eacdd1/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 DeviceMinor:1021 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1056 DeviceMajor:0 DeviceMinor:1056 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1108 DeviceMajor:0 DeviceMinor:1108 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1009 DeviceMajor:0 DeviceMinor:1009 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/0a14d09c0c63bc07a9e3f986358b6bbfe11d33fdfadd6b5aba6cb62ef0a527b0/userdata/shm DeviceMajor:0 DeviceMinor:513 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-919 DeviceMajor:0 DeviceMinor:919 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-771 DeviceMajor:0 DeviceMinor:771 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f69a00b6-d908-4485-bb0d-57594fc01d24/volumes/kubernetes.io~projected/kube-api-access-5r7qd DeviceMajor:0 DeviceMinor:244 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-199 DeviceMajor:0 DeviceMinor:199 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0d72e695-0183-4ee8-8add-5425e67f7138/volumes/kubernetes.io~projected/kube-api-access-g6bvr DeviceMajor:0 DeviceMinor:237 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/ce65f61f-8e3a-47d5-ac12-ad4ab05d2850/volumes/kubernetes.io~projected/kube-api-access-jmnjp DeviceMajor:0 DeviceMinor:798 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1223 DeviceMajor:0 DeviceMinor:1223 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8cb5158f-2199-42c0-995a-8490c9ec8a95/volumes/kubernetes.io~projected/kube-api-access-p2chb DeviceMajor:0 DeviceMinor:232 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-892 DeviceMajor:0 DeviceMinor:892 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1150 DeviceMajor:0 DeviceMinor:1150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480/volumes/kubernetes.io~secret/serving-cert 
DeviceMajor:0 DeviceMinor:223 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7f312c72332d1eca8944cf91ca9c1d896c13f62ea944da320c89182c0dd4ab06/userdata/shm DeviceMajor:0 DeviceMinor:396 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/c2635254-a491-42e5-b598-461c24bf77ca/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:432 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-958 DeviceMajor:0 DeviceMinor:958 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9cfd2323-c33a-4d80-9c25-710920c0e605/volumes/kubernetes.io~projected/kube-api-access-blfkg DeviceMajor:0 DeviceMinor:1079 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3fdec4aed0d4d1e92fcea54e18530bddc4ceb0a577b38a5b2728e046e7e0d8a1/userdata/shm DeviceMajor:0 DeviceMinor:249 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1106 DeviceMajor:0 DeviceMinor:1106 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1cb8ab19-0564-4182-a7e3-0943c1480663/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1098 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/296c63b9a082d2c4952a03261f6f9afd9282d74bb23ca7de387e35c413bd5177/userdata/shm DeviceMajor:0 DeviceMinor:657 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1213 DeviceMajor:0 DeviceMinor:1213 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-327 DeviceMajor:0 DeviceMinor:327 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/0c7b317c-d141-4e69-9c82-4a5dda6c3248/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:506 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cc6e82f62809390e77afef9a24511f8204b584c9c34f5174bf13a9f3c743fa58/userdata/shm DeviceMajor:0 DeviceMinor:511 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-828 DeviceMajor:0 DeviceMinor:828 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-285 DeviceMajor:0 DeviceMinor:285 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-556 DeviceMajor:0 DeviceMinor:556 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2cf1bdb8eb09b95692725959e60306272582dc358e1d2a541fe6b5b5e57971c0/userdata/shm DeviceMajor:0 DeviceMinor:275 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/983b16a4206de1f333a12de0d35d4c31d9a34f31d59c85ba786062d1421c921f/userdata/shm DeviceMajor:0 DeviceMinor:308 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-116 DeviceMajor:0 DeviceMinor:116 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/932a70df-3afe-4873-9449-ab6e061d3fe3/volumes/kubernetes.io~projected/kube-api-access-fv8x5 DeviceMajor:0 DeviceMinor:383 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-103 DeviceMajor:0 DeviceMinor:103 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/99f1238675e89d202ac72814030597ebf2c78d75d8dce9d24566f86cd13b327c/userdata/shm DeviceMajor:0 DeviceMinor:888 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-969 DeviceMajor:0 DeviceMinor:969 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-619 
DeviceMajor:0 DeviceMinor:619 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-878 DeviceMajor:0 DeviceMinor:878 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d/volumes/kubernetes.io~projected/kube-api-access-gpk5h DeviceMajor:0 DeviceMinor:359 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/02d02240944e9230fa342b4b1030eceabc9b6ad789e1383eef1d657905cf15af/userdata/shm DeviceMajor:0 DeviceMinor:253 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-391 DeviceMajor:0 DeviceMinor:391 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-630 DeviceMajor:0 DeviceMinor:630 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1135 DeviceMajor:0 DeviceMinor:1135 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-57 DeviceMajor:0 DeviceMinor:57 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0c606c4f78cc5d83eabd2765b617ca07a15da7eb4ca4b85bfad4f7028933f81f/userdata/shm DeviceMajor:0 DeviceMinor:119 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/74476be5-669a-4737-b93b-c4870423a4da/volumes/kubernetes.io~projected/kube-api-access-nvx6m DeviceMajor:0 DeviceMinor:1024 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-59 DeviceMajor:0 DeviceMinor:59 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-93 DeviceMajor:0 DeviceMinor:93 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7/volumes/kubernetes.io~projected/kube-api-access-wzzjs DeviceMajor:0 DeviceMinor:340 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/860dad91b3226c9023c3b60395b0ad953648fc93c4b425a376a5054813858ced/userdata/shm DeviceMajor:0 DeviceMinor:593 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-666 DeviceMajor:0 DeviceMinor:666 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-955 DeviceMajor:0 DeviceMinor:955 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6a356215383e4477cfa420d0c3e3c8a05dd9f0afe4bf19cf96af5611814ab90a/userdata/shm DeviceMajor:0 DeviceMinor:99 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/0d72e695-0183-4ee8-8add-5425e67f7138/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:219 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9a9d18e78a09ff29603fbd5fc9e03f2d3a2eb3c0cb4954994f17a7962e1ccc72/userdata/shm DeviceMajor:0 DeviceMinor:596 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/b9c87410-8689-4884-b5a8-df3ecbb7f1a4/volumes/kubernetes.io~projected/kube-api-access-l5j9d DeviceMajor:0 DeviceMinor:338 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9ecbe775d85b5008c6adeeb8170b86d61ae88bf900fcd70723b66300a47bcaec/userdata/shm DeviceMajor:0 DeviceMinor:817 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/e5e0836f-c0b4-40cd-9f63-55774da2740e/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:908 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/d26036f1-bdce-4ec5-873f-962fa7e8e6c1/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:218 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-291 DeviceMajor:0 DeviceMinor:291 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/da04c6fa-4916-4bed-a6b2-cc92bf2ee379/volumes/kubernetes.io~projected/kube-api-access-vq4rm DeviceMajor:0 DeviceMinor:493 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a/volumes/kubernetes.io~projected/kube-api-access-kxl7x DeviceMajor:0 DeviceMinor:577 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-394 DeviceMajor:0 DeviceMinor:394 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f88c2a18-11f5-45ef-aff1-3c5976716d85/volumes/kubernetes.io~projected/kube-api-access-scz6j DeviceMajor:0 DeviceMinor:812 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-88 DeviceMajor:0 DeviceMinor:88 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d9a9cd3f2878ec84a255f5f74dc3526f3a1623550d44547c9ce47a07a51bb959/userdata/shm DeviceMajor:0 DeviceMinor:114 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/8e812dd9-cd05-4e9e-8710-d0920181ece2/volumes/kubernetes.io~projected/kube-api-access-s54f9 DeviceMajor:0 DeviceMinor:236 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-310 DeviceMajor:0 DeviceMinor:310 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c669ea9b66a51273cf2d30ced0d0c7e6bfc9166bf41cddcbf86ac434cad57ea6/userdata/shm DeviceMajor:0 DeviceMinor:379 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-554 DeviceMajor:0 DeviceMinor:554 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-647 DeviceMajor:0 DeviceMinor:647 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/bb942756-bac7-414d-b179-cebdce588a13/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:126 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/9f5c64aa-676e-4e48-b714-02f6edb1d361/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:808 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/1084562a-20a0-432d-b739-90bc0a4daff2/volumes/kubernetes.io~projected/kube-api-access-qmsjt DeviceMajor:0 DeviceMinor:818 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-898 DeviceMajor:0 DeviceMinor:898 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b6948f93-b573-4f09-b754-aaa2269e2875/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:580 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fe35b5f7a2da5ebf4bbbee570d091e9d7b1840cb3252d65d0a8b082be7bbb647/userdata/shm DeviceMajor:0 DeviceMinor:1034 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/03c65d78c2c86aff78c560583deceefc749227ea76cab522d93c1dd2064cc015/userdata/shm DeviceMajor:0 DeviceMinor:520 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480/volumes/kubernetes.io~projected/kube-api-access-9fjk8 DeviceMajor:0 DeviceMinor:238 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-659 DeviceMajor:0 DeviceMinor:659 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/196e7607-1ddf-467b-9901-b4be746130a1/volumes/kubernetes.io~projected/kube-api-access-l4g9s DeviceMajor:0 DeviceMinor:1067 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/fc70fe385192b60cb00cc2ccd1eb9ea175a5eff153501a735cc786b1100d45a8/userdata/shm DeviceMajor:0 DeviceMinor:1117 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/9ccdc221-4ec5-487e-8ec4-85284ed628d8/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:85 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-928 DeviceMajor:0 DeviceMinor:928 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1122 DeviceMajor:0 DeviceMinor:1122 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2b738d6ab8a2079028f3f1e5804df92e50d8884090bb1653ec14e4d63a6afccd/userdata/shm DeviceMajor:0 DeviceMinor:361 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fd600b9af2d2390bce62bac606740fc4a23373db916a45bc5361be1ed164fee1/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/db52ca42-e458-407f-9eeb-bf6de6405edc/volumes/kubernetes.io~projected/kube-api-access-jx9p2 DeviceMajor:0 DeviceMinor:239 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-141 DeviceMajor:0 DeviceMinor:141 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/432f611b-a1a2-4cc9-b005-17a16413d281/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:465 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-462 DeviceMajor:0 DeviceMinor:462 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-860 DeviceMajor:0 DeviceMinor:860 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-772 DeviceMajor:0 DeviceMinor:772 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-184 
DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ec53d7fa-445b-4e1d-84ef-545f08e80ccc/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:220 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/0999f781-3299-4cb6-ba76-2a4f4584c685/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:230 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/da42cce599588e6c99d4cd2839a25bf8a6c6ba9dc794e5b75cfaceda627f492b/userdata/shm DeviceMajor:0 DeviceMinor:822 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/f875878f-3588-42f1-9488-750d9f4582f8/volumes/kubernetes.io~projected/kube-api-access-nn7zt DeviceMajor:0 DeviceMinor:1220 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2cf9d5a318f253e886267d57345deb8cc4469309552817e3d629697b159e40e7/userdata/shm DeviceMajor:0 DeviceMinor:600 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/db376fea-5756-4bc2-9685-f32730b5a6f7/volumes/kubernetes.io~projected/kube-api-access-r6qn5 DeviceMajor:0 DeviceMinor:332 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-939 DeviceMajor:0 DeviceMinor:939 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-766 DeviceMajor:0 DeviceMinor:766 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8f11956d88039b0b64ae7a326d73a1a29f38de2a62777ca3d744161f04878819/userdata/shm DeviceMajor:0 DeviceMinor:257 Capacity:67108864 Type:vfs Inodes:4108169 
HasInodes:true} {Device:/var/lib/kubelet/pods/d0605021-862d-424a-a4c1-037fb005b77e/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:124 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-952 DeviceMajor:0 DeviceMinor:952 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1070 DeviceMajor:0 DeviceMinor:1070 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:341 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-707 DeviceMajor:0 DeviceMinor:707 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bb35841e-d992-4044-aaaa-06c9faf47bd0/volumes/kubernetes.io~projected/kube-api-access-zlxfz DeviceMajor:0 DeviceMinor:229 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-277 DeviceMajor:0 DeviceMinor:277 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/43d54514-989c-4c82-93f9-153b44eacdd1/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:1019 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/af1bbeee-1faf-43d1-943f-ee5319cef4e9/volumes/kubernetes.io~secret/openshift-state-metrics-tls DeviceMajor:0 DeviceMinor:1097 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3acdf5b69c1ce66294030ac402e9c8e09366d47522c5ff94a22e2363f49e4024/userdata/shm DeviceMajor:0 DeviceMinor:782 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/71755097-7543-48f8-8925-0e21650bf8f6/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:810 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1225 DeviceMajor:0 
DeviceMinor:1225 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1036 DeviceMajor:0 DeviceMinor:1036 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0c7b317c-d141-4e69-9c82-4a5dda6c3248/volumes/kubernetes.io~projected/kube-api-access-549bq DeviceMajor:0 DeviceMinor:509 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-733 DeviceMajor:0 DeviceMinor:733 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d0605021-862d-424a-a4c1-037fb005b77e/volumes/kubernetes.io~projected/kube-api-access-cxj5c DeviceMajor:0 DeviceMinor:125 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/196e7607-1ddf-467b-9901-b4be746130a1/volumes/kubernetes.io~secret/certs DeviceMajor:0 DeviceMinor:1059 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d7d862ef1259d0f32a24b080a794c178935b4f82b34bd652442b355adbe27b4c/userdata/shm DeviceMajor:0 DeviceMinor:841 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/22baed4d026a2e73b0585b205810980acb867f841d04f3d6a690f1122607e415/userdata/shm DeviceMajor:0 DeviceMinor:255 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/5ea90fee-5b5e-4b59-bfc4-969ee8c7912e/volumes/kubernetes.io~secret/signing-key DeviceMajor:0 DeviceMinor:336 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:576 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-615 DeviceMajor:0 DeviceMinor:615 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-843 DeviceMajor:0 DeviceMinor:843 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/af1bbeee-1faf-43d1-943f-ee5319cef4e9/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1092 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/0442ec6c-5973-40a5-a0c3-dc02de46d343/volumes/kubernetes.io~projected/kube-api-access-5x6ht DeviceMajor:0 DeviceMinor:123 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-570 DeviceMajor:0 DeviceMinor:570 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2d014721-ed53-447a-b737-c496bbba18be/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:886 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/2d014721-ed53-447a-b737-c496bbba18be/volumes/kubernetes.io~projected/kube-api-access-4btrk DeviceMajor:0 DeviceMinor:887 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/1ad4aa30-f7d5-47ca-b01e-2643f7195685/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:795 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9dc4baf2ee903f66ceacf214f401bab7bc4c01b6dec665d83f3584b31ae00f41/userdata/shm DeviceMajor:0 DeviceMinor:64 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-726 DeviceMajor:0 DeviceMinor:726 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8ee99294-4785-49d0-b493-0d734cf09396/volumes/kubernetes.io~projected/kube-api-access-tb7tz DeviceMajor:0 DeviceMinor:224 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/e8d3cf68-ed97-45b9-8c83-b42bb1f789fc/volumes/kubernetes.io~projected/kube-api-access-59hld DeviceMajor:0 DeviceMinor:501 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-130 DeviceMajor:0 DeviceMinor:130 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f076eaf0-b041-4db0-ba06-3d85e23bb654/volumes/kubernetes.io~projected/kube-api-access-f25pg DeviceMajor:0 DeviceMinor:248 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-461 DeviceMajor:0 DeviceMinor:461 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8e83e941e1bb6d2e2e4ed50989f8c4a7c436dc56c6018257d976ac9218210eba/userdata/shm DeviceMajor:0 DeviceMinor:1003 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2e6eabf2087e36d3613240f79a61ceca615c772d05baa285322d88bd80a44773/userdata/shm DeviceMajor:0 DeviceMinor:1028 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-405 DeviceMajor:0 DeviceMinor:405 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6b37b06bafa3fe7617d0c4d370f2bc9e1e4e31111091703de1b10d8a3711bfba/userdata/shm DeviceMajor:0 DeviceMinor:605 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-867 DeviceMajor:0 DeviceMinor:867 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/582d2ba8-1210-47d0-a530-0b20b2fdde22/volumes/kubernetes.io~secret/tls-certificates DeviceMajor:0 DeviceMinor:1020 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dfd0e7e42052e04911701599adae500aa7e091be93bca4bd99512045dd966402/userdata/shm DeviceMajor:0 DeviceMinor:259 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/var/lib/kubelet/pods/db52ca42-e458-407f-9eeb-bf6de6405edc/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:582 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-471 DeviceMajor:0 DeviceMinor:471 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bdf80ddc-7c99-4f60-814b-ba98809ef41d/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:866 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-496 DeviceMajor:0 DeviceMinor:496 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a078565a-6970-4f42-84f4-938f1d637245/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:215 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cc949f0d8f85c68fa457f1194d4c5e8aa9bf8a96548dfb4976d04f8be5a7a9b6/userdata/shm DeviceMajor:0 DeviceMinor:352 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-850 DeviceMajor:0 DeviceMinor:850 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-896 DeviceMajor:0 DeviceMinor:896 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ac57b9f21c66b05de1907050080a6922bfb455574d5cf2698b6bd4c95c6df165/userdata/shm DeviceMajor:0 DeviceMinor:1026 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-757 DeviceMajor:0 DeviceMinor:757 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/da04c6fa-4916-4bed-a6b2-cc92bf2ee379/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:502 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-504 DeviceMajor:0 DeviceMinor:504 Capacity:214143315968 
Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-139 DeviceMajor:0 DeviceMinor:139 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8ee99294-4785-49d0-b493-0d734cf09396/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:246 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-622 DeviceMajor:0 DeviceMinor:622 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-778 DeviceMajor:0 DeviceMinor:778 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bb942756-bac7-414d-b179-cebdce588a13/volumes/kubernetes.io~projected/kube-api-access-2ktpl DeviceMajor:0 DeviceMinor:147 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-295 DeviceMajor:0 DeviceMinor:295 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-527 DeviceMajor:0 DeviceMinor:527 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9fc664ff-2e8f-441d-82dc-8f21c1d362d7/volumes/kubernetes.io~projected/kube-api-access-x46bf DeviceMajor:0 DeviceMinor:358 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b58497ff3c8993b13d6f045f9b3aa17b9b5e464305fd642acb69bc40d01db14a/userdata/shm DeviceMajor:0 DeviceMinor:148 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-549 DeviceMajor:0 DeviceMinor:549 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-261 DeviceMajor:0 DeviceMinor:261 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1116 DeviceMajor:0 DeviceMinor:1116 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:127 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/b6948f93-b573-4f09-b754-aaa2269e2875/volumes/kubernetes.io~projected/kube-api-access-t2g9q DeviceMajor:0 DeviceMinor:603 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:00431ec658bea7a MacAddress:32:9f:15:40:37:05 Speed:10000 Mtu:8900} {Name:02d02240944e923 MacAddress:b2:6b:74:54:d9:ec Speed:10000 Mtu:8900} {Name:03c65d78c2c86af MacAddress:1e:ff:b0:2e:d4:51 Speed:10000 Mtu:8900} {Name:04e5c67b5ae7934 MacAddress:2a:a0:44:95:32:5b Speed:10000 Mtu:8900} {Name:0a14d09c0c63bc0 MacAddress:fa:ee:db:28:73:29 Speed:10000 Mtu:8900} {Name:0a709f6a031857e MacAddress:0e:6e:2f:71:6c:4e Speed:10000 Mtu:8900} {Name:0ab9786ebf50a65 MacAddress:82:13:5f:8c:3f:01 Speed:10000 Mtu:8900} {Name:0d84a97391b20bb MacAddress:66:d1:0c:98:39:7e Speed:10000 Mtu:8900} {Name:13ead1a9d130e4c MacAddress:12:0b:b7:0e:d6:a4 Speed:10000 Mtu:8900} {Name:15afbeaf2b91c3d MacAddress:2a:93:30:5d:84:57 Speed:10000 Mtu:8900} {Name:1b4d46c0a582fa8 MacAddress:92:cb:1f:8b:27:37 Speed:10000 Mtu:8900} {Name:1d7c06dbc8e2f88 MacAddress:02:1a:a6:17:79:81 Speed:10000 Mtu:8900} {Name:1ddd0ca0bee2bbe MacAddress:2e:40:e8:81:3b:3d Speed:10000 Mtu:8900} {Name:2108f9b19bef723 MacAddress:f2:81:6f:e2:9a:ba Speed:10000 Mtu:8900} {Name:22baed4d026a2e7 MacAddress:fa:b8:21:d5:56:4e Speed:10000 Mtu:8900} {Name:274e9b834559b12 MacAddress:62:4a:0b:db:0a:30 Speed:10000 Mtu:8900} {Name:2b738d6ab8a2079 
MacAddress:52:98:db:c0:97:d9 Speed:10000 Mtu:8900} {Name:2cf1bdb8eb09b95 MacAddress:ee:dc:0f:bf:51:b5 Speed:10000 Mtu:8900} {Name:2cf9d5a318f253e MacAddress:16:d9:fd:8a:90:6a Speed:10000 Mtu:8900} {Name:2e6eabf2087e36d MacAddress:fe:1a:25:31:73:59 Speed:10000 Mtu:8900} {Name:3fdec4aed0d4d1e MacAddress:1e:18:88:c7:09:b3 Speed:10000 Mtu:8900} {Name:481a20c56b1513a MacAddress:a2:be:62:13:1f:14 Speed:10000 Mtu:8900} {Name:543fb2147aca575 MacAddress:5e:b1:db:d5:47:9e Speed:10000 Mtu:8900} {Name:5f264524ff79429 MacAddress:0a:99:12:98:b1:56 Speed:10000 Mtu:8900} {Name:613533c3a19224e MacAddress:c2:9e:38:85:c9:02 Speed:10000 Mtu:8900} {Name:61f6b81b92e4d6e MacAddress:92:0d:73:98:d8:06 Speed:10000 Mtu:8900} {Name:6b37b06bafa3fe7 MacAddress:2e:96:f5:b7:59:ff Speed:10000 Mtu:8900} {Name:7483df25713a00b MacAddress:de:8d:96:12:96:e1 Speed:10000 Mtu:8900} {Name:7f312c72332d1ec MacAddress:5a:68:3f:a6:40:01 Speed:10000 Mtu:8900} {Name:84a3629f241ccd1 MacAddress:82:5a:97:c8:d9:61 Speed:10000 Mtu:8900} {Name:860dad91b3226c9 MacAddress:8e:a1:04:62:4f:12 Speed:10000 Mtu:8900} {Name:8e83e941e1bb6d2 MacAddress:ea:02:fa:7d:e4:60 Speed:10000 Mtu:8900} {Name:8f11956d88039b0 MacAddress:8e:c4:fa:f0:7b:a2 Speed:10000 Mtu:8900} {Name:94d378b5868ac49 MacAddress:a6:a7:99:da:f2:fd Speed:10000 Mtu:8900} {Name:983b16a4206de1f MacAddress:56:6a:8d:7d:81:45 Speed:10000 Mtu:8900} {Name:99f1238675e89d2 MacAddress:1a:3a:bd:a9:2a:c3 Speed:10000 Mtu:8900} {Name:9a9d18e78a09ff2 MacAddress:9e:84:d8:6f:ae:8b Speed:10000 Mtu:8900} {Name:9c1ce07b6c7993e MacAddress:ba:05:8b:1c:24:dc Speed:10000 Mtu:8900} {Name:9ecbe775d85b500 MacAddress:e2:d4:b7:49:7d:80 Speed:10000 Mtu:8900} {Name:9fee5c93850116c MacAddress:36:ef:d1:9c:9c:1c Speed:10000 Mtu:8900} {Name:a0f6a23031d9623 MacAddress:a2:14:43:e7:30:80 Speed:10000 Mtu:8900} {Name:a4b6c9bb5e1aa6d MacAddress:4a:ba:c9:05:3f:39 Speed:10000 Mtu:8900} {Name:a62338b3d8b6fef MacAddress:9e:47:70:e7:92:d3 Speed:10000 Mtu:8900} {Name:a8685da7c022ead MacAddress:66:7b:9a:18:e4:61 
Speed:10000 Mtu:8900} {Name:ac57b9f21c66b05 MacAddress:3e:85:11:e1:04:5e Speed:10000 Mtu:8900} {Name:adda5560398a1e9 MacAddress:ee:ba:d6:e3:0e:cd Speed:10000 Mtu:8900} {Name:b588169f9714563 MacAddress:96:7a:8b:f7:5d:9d Speed:10000 Mtu:8900} {Name:b7ca349d109c7ce MacAddress:ca:1f:0d:48:27:4f Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:4e:a6:8c:3a:8f:7c Speed:0 Mtu:8900} {Name:c669ea9b66a5127 MacAddress:5e:95:31:cb:66:e2 Speed:10000 Mtu:8900} {Name:cc949f0d8f85c68 MacAddress:96:d6:46:59:ab:b8 Speed:10000 Mtu:8900} {Name:d7d862ef1259d0f MacAddress:9e:c0:50:cb:a5:bd Speed:10000 Mtu:8900} {Name:da42cce599588e6 MacAddress:d2:6c:24:9c:c8:84 Speed:10000 Mtu:8900} {Name:dc23eb8c4f8df61 MacAddress:0e:d5:eb:f4:07:c5 Speed:10000 Mtu:8900} {Name:dd0e307b59dcdef MacAddress:42:a9:19:cd:0b:eb Speed:10000 Mtu:8900} {Name:e702a6208830f57 MacAddress:86:84:df:50:5a:ff Speed:10000 Mtu:8900} {Name:ee46779ae89b4ca MacAddress:76:2c:03:58:1c:a7 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:50:e9:f6 Speed:-1 Mtu:9000} {Name:fe35b5f7a2da5eb MacAddress:f2:05:3c:17:65:4b Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:26:48:4a:2c:71:6e Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 
Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] 
SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Mar 18 10:10:36.090576 master-0 kubenswrapper[30420]: I0318 10:10:36.086466 30420 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Mar 18 10:10:36.090576 master-0 kubenswrapper[30420]: I0318 10:10:36.086522 30420 manager.go:233] Version: {KernelVersion:5.14.0-427.113.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202603021444-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Mar 18 10:10:36.090576 master-0 kubenswrapper[30420]: I0318 10:10:36.086698 30420 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 18 10:10:36.090576 master-0 kubenswrapper[30420]: I0318 10:10:36.086846 30420 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 18 10:10:36.090576 master-0 kubenswrapper[30420]: I0318 10:10:36.086873 30420 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 18 10:10:36.090576 master-0 kubenswrapper[30420]: I0318 10:10:36.087089 30420 topology_manager.go:138] "Creating topology manager with none policy"
Mar 18 10:10:36.090576 master-0 kubenswrapper[30420]: I0318 10:10:36.087099 30420 container_manager_linux.go:303] "Creating device plugin manager"
Mar 18 10:10:36.090576 master-0 kubenswrapper[30420]: I0318 10:10:36.087111 30420 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 18 10:10:36.090576 master-0 kubenswrapper[30420]: I0318 10:10:36.087130 30420 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 18 10:10:36.090576 master-0 kubenswrapper[30420]: I0318 10:10:36.087173 30420 state_mem.go:36] "Initialized new in-memory state store"
Mar 18 10:10:36.090576 master-0 kubenswrapper[30420]: I0318 10:10:36.087282 30420 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Mar 18 10:10:36.090576 master-0 kubenswrapper[30420]: I0318 10:10:36.087341 30420 kubelet.go:418] "Attempting to sync node with API server"
Mar 18 10:10:36.090576 master-0 kubenswrapper[30420]: I0318 10:10:36.087352 30420 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 18 10:10:36.090576 master-0 kubenswrapper[30420]: I0318 10:10:36.087365 30420 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Mar 18 10:10:36.090576 master-0 kubenswrapper[30420]: I0318 10:10:36.087375 30420 kubelet.go:324] "Adding apiserver pod source"
Mar 18 10:10:36.090576 master-0 kubenswrapper[30420]: I0318 10:10:36.087384 30420 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 18 10:10:36.090576 master-0 kubenswrapper[30420]: I0318 10:10:36.090183 30420 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1"
Mar 18 10:10:36.091634 master-0 kubenswrapper[30420]: I0318 10:10:36.090674 30420 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Mar 18 10:10:36.091634 master-0 kubenswrapper[30420]: I0318 10:10:36.091115 30420 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 18 10:10:36.091634 master-0 kubenswrapper[30420]: I0318 10:10:36.091270 30420 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Mar 18 10:10:36.091634 master-0 kubenswrapper[30420]: I0318 10:10:36.091288 30420 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Mar 18 10:10:36.091634 master-0 kubenswrapper[30420]: I0318 10:10:36.091295 30420 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Mar 18 10:10:36.091634 master-0 kubenswrapper[30420]: I0318 10:10:36.091303 30420 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Mar 18 10:10:36.091634 master-0 kubenswrapper[30420]: I0318 10:10:36.091318 30420 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Mar 18 10:10:36.091634 master-0 kubenswrapper[30420]: I0318 10:10:36.091325 30420 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Mar 18 10:10:36.091634 master-0 kubenswrapper[30420]: I0318 10:10:36.091339 30420 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Mar 18 10:10:36.091634 master-0 kubenswrapper[30420]: I0318 10:10:36.091345 30420 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Mar 18 10:10:36.091634 master-0 kubenswrapper[30420]: I0318 10:10:36.091354 30420 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Mar 18 10:10:36.091634 master-0 kubenswrapper[30420]: I0318 10:10:36.091361 30420 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Mar 18 10:10:36.091634 master-0 kubenswrapper[30420]: I0318 10:10:36.091372 30420 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Mar 18 10:10:36.091634 master-0 kubenswrapper[30420]: I0318 10:10:36.091388 30420 
plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 18 10:10:36.091634 master-0 kubenswrapper[30420]: I0318 10:10:36.091414 30420 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 18 10:10:36.092267 master-0 kubenswrapper[30420]: I0318 10:10:36.091939 30420 server.go:1280] "Started kubelet" Mar 18 10:10:36.092555 master-0 kubenswrapper[30420]: I0318 10:10:36.092497 30420 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 18 10:10:36.093761 master-0 kubenswrapper[30420]: I0318 10:10:36.093161 30420 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 18 10:10:36.093761 master-0 kubenswrapper[30420]: I0318 10:10:36.093590 30420 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 18 10:10:36.093591 master-0 systemd[1]: Started Kubernetes Kubelet. Mar 18 10:10:36.094852 master-0 kubenswrapper[30420]: I0318 10:10:36.094809 30420 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 18 10:10:36.097022 master-0 kubenswrapper[30420]: I0318 10:10:36.096976 30420 server.go:449] "Adding debug handlers to kubelet server" Mar 18 10:10:36.110790 master-0 kubenswrapper[30420]: I0318 10:10:36.108502 30420 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 18 10:10:36.110790 master-0 kubenswrapper[30420]: I0318 10:10:36.108540 30420 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 18 10:10:36.110790 master-0 kubenswrapper[30420]: I0318 10:10:36.108907 30420 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-19 09:43:17 +0000 UTC, rotation deadline is 2026-03-19 05:02:37.825908428 +0000 UTC Mar 18 10:10:36.110790 master-0 kubenswrapper[30420]: I0318 10:10:36.108969 30420 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 18h52m1.716941895s for next certificate 
rotation Mar 18 10:10:36.110790 master-0 kubenswrapper[30420]: E0318 10:10:36.110285 30420 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 10:10:36.110790 master-0 kubenswrapper[30420]: I0318 10:10:36.110447 30420 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 18 10:10:36.110790 master-0 kubenswrapper[30420]: I0318 10:10:36.110465 30420 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 18 10:10:36.110790 master-0 kubenswrapper[30420]: I0318 10:10:36.110548 30420 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 18 10:10:36.128860 master-0 kubenswrapper[30420]: I0318 10:10:36.120517 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6f266bad-8b30-4300-ad93-9d48e61f2440" volumeName="kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics" seLinuxMountContext="" Mar 18 10:10:36.128860 master-0 kubenswrapper[30420]: I0318 10:10:36.120580 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0c7b317c-d141-4e69-9c82-4a5dda6c3248" volumeName="kubernetes.io/projected/0c7b317c-d141-4e69-9c82-4a5dda6c3248-kube-api-access-549bq" seLinuxMountContext="" Mar 18 10:10:36.128860 master-0 kubenswrapper[30420]: I0318 10:10:36.120595 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3646e0cd-49c9-4a98-a2e3-efe9359cc6c4" volumeName="kubernetes.io/secret/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4-serving-cert" seLinuxMountContext="" Mar 18 10:10:36.128860 master-0 kubenswrapper[30420]: I0318 10:10:36.120607 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af1bbeee-1faf-43d1-943f-ee5319cef4e9" volumeName="kubernetes.io/secret/af1bbeee-1faf-43d1-943f-ee5319cef4e9-openshift-state-metrics-kube-rbac-proxy-config" 
seLinuxMountContext="" Mar 18 10:10:36.128860 master-0 kubenswrapper[30420]: I0318 10:10:36.120619 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da04c6fa-4916-4bed-a6b2-cc92bf2ee379" volumeName="kubernetes.io/configmap/da04c6fa-4916-4bed-a6b2-cc92bf2ee379-config-volume" seLinuxMountContext="" Mar 18 10:10:36.128860 master-0 kubenswrapper[30420]: I0318 10:10:36.120630 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0c7b317c-d141-4e69-9c82-4a5dda6c3248" volumeName="kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-image-import-ca" seLinuxMountContext="" Mar 18 10:10:36.128860 master-0 kubenswrapper[30420]: I0318 10:10:36.120641 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="106fc2a2-9e7b-4f86-94b8-b1a1906646d8" volumeName="kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-secret-metrics-server-tls" seLinuxMountContext="" Mar 18 10:10:36.128860 master-0 kubenswrapper[30420]: I0318 10:10:36.120910 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71755097-7543-48f8-8925-0e21650bf8f6" volumeName="kubernetes.io/empty-dir/71755097-7543-48f8-8925-0e21650bf8f6-snapshots" seLinuxMountContext="" Mar 18 10:10:36.128860 master-0 kubenswrapper[30420]: I0318 10:10:36.120948 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da04c6fa-4916-4bed-a6b2-cc92bf2ee379" volumeName="kubernetes.io/secret/da04c6fa-4916-4bed-a6b2-cc92bf2ee379-metrics-tls" seLinuxMountContext="" Mar 18 10:10:36.128860 master-0 kubenswrapper[30420]: I0318 10:10:36.120971 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0c7b317c-d141-4e69-9c82-4a5dda6c3248" volumeName="kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-etcd-serving-ca" 
seLinuxMountContext="" Mar 18 10:10:36.128860 master-0 kubenswrapper[30420]: I0318 10:10:36.120984 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3646e0cd-49c9-4a98-a2e3-efe9359cc6c4" volumeName="kubernetes.io/configmap/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4-config" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.132116 30420 factory.go:55] Registering systemd factory Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.132152 30420 factory.go:221] Registration of the systemd container factory successfully Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.132219 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43d54514-989c-4c82-93f9-153b44eacdd1" volumeName="kubernetes.io/projected/43d54514-989c-4c82-93f9-153b44eacdd1-kube-api-access-z459j" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.132280 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9cfd2323-c33a-4d80-9c25-710920c0e605" volumeName="kubernetes.io/secret/9cfd2323-c33a-4d80-9c25-710920c0e605-prometheus-operator-tls" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.132332 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d02e790-b9d0-4e2d-a97d-ec2eaf720f28" volumeName="kubernetes.io/configmap/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-env-overrides" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.132360 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fc664ff-2e8f-441d-82dc-8f21c1d362d7" volumeName="kubernetes.io/projected/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-kube-api-access-x46bf" seLinuxMountContext="" Mar 18 10:10:36.136193 
master-0 kubenswrapper[30420]: I0318 10:10:36.132449 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e5e0836f-c0b4-40cd-9f63-55774da2740e" volumeName="kubernetes.io/configmap/e5e0836f-c0b4-40cd-9f63-55774da2740e-mcd-auth-proxy-config" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.132471 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="03de1ea6-da57-4e13-8e5a-d5e10a9f9957" volumeName="kubernetes.io/configmap/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-daemon-config" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.132499 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cb8ab19-0564-4182-a7e3-0943c1480663" volumeName="kubernetes.io/empty-dir/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-textfile" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.132515 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29490aed-9c97-42d1-94c8-44d1de13b70c" volumeName="kubernetes.io/secret/29490aed-9c97-42d1-94c8-44d1de13b70c-cluster-storage-operator-serving-cert" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.132531 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ee99294-4785-49d0-b493-0d734cf09396" volumeName="kubernetes.io/projected/8ee99294-4785-49d0-b493-0d734cf09396-bound-sa-token" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.132555 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b0f77d68-f228-4f82-befb-fb2a2ce2e976" volumeName="kubernetes.io/empty-dir/b0f77d68-f228-4f82-befb-fb2a2ce2e976-tmp" seLinuxMountContext="" 
Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.132588 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bdf80ddc-7c99-4f60-814b-ba98809ef41d" volumeName="kubernetes.io/projected/bdf80ddc-7c99-4f60-814b-ba98809ef41d-kube-api-access-bql7p" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.132604 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d72e695-0183-4ee8-8add-5425e67f7138" volumeName="kubernetes.io/configmap/0d72e695-0183-4ee8-8add-5425e67f7138-config" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.132636 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="196e7607-1ddf-467b-9901-b4be746130a1" volumeName="kubernetes.io/secret/196e7607-1ddf-467b-9901-b4be746130a1-node-bootstrap-token" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.132650 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ad4aa30-f7d5-47ca-b01e-2643f7195685" volumeName="kubernetes.io/configmap/1ad4aa30-f7d5-47ca-b01e-2643f7195685-config" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.132673 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d014721-ed53-447a-b737-c496bbba18be" volumeName="kubernetes.io/configmap/2d014721-ed53-447a-b737-c496bbba18be-images" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.132694 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8641c1d1-dd79-4f1f-9343-52d1ee6faf9f" volumeName="kubernetes.io/configmap/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-auth-proxy-config" seLinuxMountContext="" Mar 18 
10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.132718 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8b906fc0-f2bf-4586-97e6-921bbd467b65" volumeName="kubernetes.io/secret/8b906fc0-f2bf-4586-97e6-921bbd467b65-serving-cert" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.132738 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa4cba67-b5d4-46c2-8cad-1a1379f764cb" volumeName="kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-federate-client-tls" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.132758 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f076eaf0-b041-4db0-ba06-3d85e23bb654" volumeName="kubernetes.io/projected/f076eaf0-b041-4db0-ba06-3d85e23bb654-kube-api-access-f25pg" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.132773 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d72e695-0183-4ee8-8add-5425e67f7138" volumeName="kubernetes.io/projected/0d72e695-0183-4ee8-8add-5425e67f7138-kube-api-access-g6bvr" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.132794 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b9c87410-8689-4884-b5a8-df3ecbb7f1a4" volumeName="kubernetes.io/empty-dir/b9c87410-8689-4884-b5a8-df3ecbb7f1a4-catalog-content" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.132809 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bb942756-bac7-414d-b179-cebdce588a13" volumeName="kubernetes.io/configmap/bb942756-bac7-414d-b179-cebdce588a13-env-overrides" seLinuxMountContext="" Mar 
18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.132847 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2635254-a491-42e5-b598-461c24bf77ca" volumeName="kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.132905 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec53d7fa-445b-4e1d-84ef-545f08e80ccc" volumeName="kubernetes.io/secret/ec53d7fa-445b-4e1d-84ef-545f08e80ccc-serving-cert" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.132921 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cb8ab19-0564-4182-a7e3-0943c1480663" volumeName="kubernetes.io/secret/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-tls" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.132944 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d" volumeName="kubernetes.io/configmap/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-client-ca" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.132958 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fc664ff-2e8f-441d-82dc-8f21c1d362d7" volumeName="kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-proxy-ca-bundles" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.132976 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af1bbeee-1faf-43d1-943f-ee5319cef4e9" volumeName="kubernetes.io/secret/af1bbeee-1faf-43d1-943f-ee5319cef4e9-openshift-state-metrics-tls" seLinuxMountContext="" 
Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.132996 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f69a00b6-d908-4485-bb0d-57594fc01d24" volumeName="kubernetes.io/projected/f69a00b6-d908-4485-bb0d-57594fc01d24-kube-api-access-5r7qd" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.133011 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa4cba67-b5d4-46c2-8cad-1a1379f764cb" volumeName="kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-secret-telemeter-client" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.133032 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7" volumeName="kubernetes.io/empty-dir/1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7-utilities" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.133049 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7" volumeName="kubernetes.io/empty-dir/1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7-catalog-content" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.133063 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6f266bad-8b30-4300-ad93-9d48e61f2440" volumeName="kubernetes.io/configmap/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-trusted-ca" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.133083 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d" volumeName="kubernetes.io/secret/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-serving-cert" 
seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.133327 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9cfd2323-c33a-4d80-9c25-710920c0e605" volumeName="kubernetes.io/configmap/9cfd2323-c33a-4d80-9c25-710920c0e605-metrics-client-ca" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.133453 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d02e790-b9d0-4e2d-a97d-ec2eaf720f28" volumeName="kubernetes.io/projected/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-kube-api-access-gmffc" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.133493 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d4d2218c-f9df-4d43-8727-ed3a920e23f7" volumeName="kubernetes.io/projected/d4d2218c-f9df-4d43-8727-ed3a920e23f7-kube-api-access-w4qp9" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.133517 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6" volumeName="kubernetes.io/secret/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6-serving-cert" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.133565 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da04c6fa-4916-4bed-a6b2-cc92bf2ee379" volumeName="kubernetes.io/projected/da04c6fa-4916-4bed-a6b2-cc92bf2ee379-kube-api-access-vq4rm" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.133967 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="106fc2a2-9e7b-4f86-94b8-b1a1906646d8" 
volumeName="kubernetes.io/empty-dir/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-audit-log" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.133990 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ad4aa30-f7d5-47ca-b01e-2643f7195685" volumeName="kubernetes.io/secret/1ad4aa30-f7d5-47ca-b01e-2643f7195685-machine-approver-tls" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.134041 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8b906fc0-f2bf-4586-97e6-921bbd467b65" volumeName="kubernetes.io/secret/8b906fc0-f2bf-4586-97e6-921bbd467b65-encryption-config" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.134067 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="accc57fb-75f5-4f89-9804-6ede7f77e27c" volumeName="kubernetes.io/projected/accc57fb-75f5-4f89-9804-6ede7f77e27c-kube-api-access-nwfph" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.134088 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bdf80ddc-7c99-4f60-814b-ba98809ef41d" volumeName="kubernetes.io/empty-dir/bdf80ddc-7c99-4f60-814b-ba98809ef41d-tmpfs" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.134105 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="106fc2a2-9e7b-4f86-94b8-b1a1906646d8" volumeName="kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-secret-metrics-client-certs" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.134125 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43d54514-989c-4c82-93f9-153b44eacdd1" 
volumeName="kubernetes.io/configmap/43d54514-989c-4c82-93f9-153b44eacdd1-service-ca-bundle" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.134141 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6948f93-b573-4f09-b754-aaa2269e2875" volumeName="kubernetes.io/empty-dir/b6948f93-b573-4f09-b754-aaa2269e2875-cache" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.134201 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d0605021-862d-424a-a4c1-037fb005b77e" volumeName="kubernetes.io/configmap/d0605021-862d-424a-a4c1-037fb005b77e-env-overrides" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.134219 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cb5158f-2199-42c0-995a-8490c9ec8a95" volumeName="kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.134233 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bdf80ddc-7c99-4f60-814b-ba98809ef41d" volumeName="kubernetes.io/secret/bdf80ddc-7c99-4f60-814b-ba98809ef41d-webhook-cert" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.134254 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0945a421-d7c4-46df-b3d9-507443627d51" volumeName="kubernetes.io/empty-dir/0945a421-d7c4-46df-b3d9-507443627d51-catalog-content" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.134269 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cb8ab19-0564-4182-a7e3-0943c1480663" 
volumeName="kubernetes.io/configmap/1cb8ab19-0564-4182-a7e3-0943c1480663-metrics-client-ca" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.134288 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cb8ab19-0564-4182-a7e3-0943c1480663" volumeName="kubernetes.io/secret/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-kube-rbac-proxy-config" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.134303 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fc664ff-2e8f-441d-82dc-8f21c1d362d7" volumeName="kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-client-ca" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.134326 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d4d2218c-f9df-4d43-8727-ed3a920e23f7" volumeName="kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.134377 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" volumeName="kubernetes.io/projected/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480-kube-api-access-9fjk8" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.134391 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74476be5-669a-4737-b93b-c4870423a4da" volumeName="kubernetes.io/secret/74476be5-669a-4737-b93b-c4870423a4da-cert" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.134432 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8e812dd9-cd05-4e9e-8710-d0920181ece2" volumeName="kubernetes.io/projected/8e812dd9-cd05-4e9e-8710-d0920181ece2-kube-api-access-s54f9" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.134449 30420 factory.go:153] Registering CRI-O factory Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.134665 30420 factory.go:221] Registration of the crio container factory successfully Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.134479 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f076eaf0-b041-4db0-ba06-3d85e23bb654" volumeName="kubernetes.io/secret/f076eaf0-b041-4db0-ba06-3d85e23bb654-serving-cert" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135133 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d02e790-b9d0-4e2d-a97d-ec2eaf720f28" volumeName="kubernetes.io/secret/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-ovn-node-metrics-cert" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135161 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b0f77d68-f228-4f82-befb-fb2a2ce2e976" volumeName="kubernetes.io/projected/b0f77d68-f228-4f82-befb-fb2a2ce2e976-kube-api-access-t77j8" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135174 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5900a401-21c2-47f0-a921-47c648da558d" volumeName="kubernetes.io/configmap/5900a401-21c2-47f0-a921-47c648da558d-metrics-client-ca" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135184 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a078565a-6970-4f42-84f4-938f1d637245" volumeName="kubernetes.io/configmap/a078565a-6970-4f42-84f4-938f1d637245-config" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135198 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="db52ca42-e458-407f-9eeb-bf6de6405edc" volumeName="kubernetes.io/projected/db52ca42-e458-407f-9eeb-bf6de6405edc-kube-api-access-jx9p2" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135209 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0" volumeName="kubernetes.io/projected/2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0-kube-api-access-gmxj9" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135221 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5900a401-21c2-47f0-a921-47c648da558d" volumeName="kubernetes.io/projected/5900a401-21c2-47f0-a921-47c648da558d-kube-api-access-qtnxf" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135233 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8641c1d1-dd79-4f1f-9343-52d1ee6faf9f" volumeName="kubernetes.io/configmap/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-images" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135243 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a078565a-6970-4f42-84f4-938f1d637245" volumeName="kubernetes.io/configmap/a078565a-6970-4f42-84f4-938f1d637245-etcd-ca" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135257 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a3657106-1eea-4031-8c92-85ba6287b425" volumeName="kubernetes.io/projected/a3657106-1eea-4031-8c92-85ba6287b425-kube-api-access" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135266 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2635254-a491-42e5-b598-461c24bf77ca" volumeName="kubernetes.io/projected/c2635254-a491-42e5-b598-461c24bf77ca-kube-api-access-p4hfd" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135278 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="03de1ea6-da57-4e13-8e5a-d5e10a9f9957" volumeName="kubernetes.io/projected/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-kube-api-access-hcj8f" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135289 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="106fc2a2-9e7b-4f86-94b8-b1a1906646d8" volumeName="kubernetes.io/configmap/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-configmap-kubelet-serving-ca-bundle" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135301 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="196e7607-1ddf-467b-9901-b4be746130a1" volumeName="kubernetes.io/projected/196e7607-1ddf-467b-9901-b4be746130a1-kube-api-access-l4g9s" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135313 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7" volumeName="kubernetes.io/projected/1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7-kube-api-access-wzzjs" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135323 30420 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="43d54514-989c-4c82-93f9-153b44eacdd1" volumeName="kubernetes.io/secret/43d54514-989c-4c82-93f9-153b44eacdd1-stats-auth" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135335 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71755097-7543-48f8-8925-0e21650bf8f6" volumeName="kubernetes.io/secret/71755097-7543-48f8-8925-0e21650bf8f6-serving-cert" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135354 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ccdc221-4ec5-487e-8ec4-85284ed628d8" volumeName="kubernetes.io/projected/9ccdc221-4ec5-487e-8ec4-85284ed628d8-kube-api-access-ghd2r" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135363 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa4cba67-b5d4-46c2-8cad-1a1379f764cb" volumeName="kubernetes.io/projected/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-kube-api-access-sxf74" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135375 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa4cba67-b5d4-46c2-8cad-1a1379f764cb" volumeName="kubernetes.io/configmap/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-metrics-client-ca" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135384 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0c7b317c-d141-4e69-9c82-4a5dda6c3248" volumeName="kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-config" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135393 30420 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="1ad4aa30-f7d5-47ca-b01e-2643f7195685" volumeName="kubernetes.io/configmap/1ad4aa30-f7d5-47ca-b01e-2643f7195685-auth-proxy-config" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135405 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91331360-dc70-45bb-a815-e00664bae6c4" volumeName="kubernetes.io/configmap/91331360-dc70-45bb-a815-e00664bae6c4-cni-binary-copy" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135414 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d02e790-b9d0-4e2d-a97d-ec2eaf720f28" volumeName="kubernetes.io/configmap/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-ovnkube-script-lib" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135426 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d014721-ed53-447a-b737-c496bbba18be" volumeName="kubernetes.io/projected/2d014721-ed53-447a-b737-c496bbba18be-kube-api-access-4btrk" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135435 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6" volumeName="kubernetes.io/configmap/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6-config" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135444 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f5c64aa-676e-4e48-b714-02f6edb1d361" volumeName="kubernetes.io/projected/9f5c64aa-676e-4e48-b714-02f6edb1d361-kube-api-access-xttqt" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135456 30420 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="a078565a-6970-4f42-84f4-938f1d637245" volumeName="kubernetes.io/secret/a078565a-6970-4f42-84f4-938f1d637245-etcd-client" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135467 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="caec44dc-aab7-4407-b34a-52bbe4b4f635" volumeName="kubernetes.io/configmap/caec44dc-aab7-4407-b34a-52bbe4b4f635-cco-trusted-ca" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135481 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a" volumeName="kubernetes.io/secret/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-catalogserver-certs" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135493 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0c7b317c-d141-4e69-9c82-4a5dda6c3248" volumeName="kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-audit" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135508 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29490aed-9c97-42d1-94c8-44d1de13b70c" volumeName="kubernetes.io/projected/29490aed-9c97-42d1-94c8-44d1de13b70c-kube-api-access-257hk" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135521 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0442ec6c-5973-40a5-a0c3-dc02de46d343" volumeName="kubernetes.io/projected/0442ec6c-5973-40a5-a0c3-dc02de46d343-kube-api-access-5x6ht" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135533 30420 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="71755097-7543-48f8-8925-0e21650bf8f6" volumeName="kubernetes.io/projected/71755097-7543-48f8-8925-0e21650bf8f6-kube-api-access-qvhfc" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135557 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ee99294-4785-49d0-b493-0d734cf09396" volumeName="kubernetes.io/projected/8ee99294-4785-49d0-b493-0d734cf09396-kube-api-access-tb7tz" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135575 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="caec44dc-aab7-4407-b34a-52bbe4b4f635" volumeName="kubernetes.io/projected/caec44dc-aab7-4407-b34a-52bbe4b4f635-kube-api-access-xml27" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135586 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88c2a18-11f5-45ef-aff1-3c5976716d85" volumeName="kubernetes.io/secret/f88c2a18-11f5-45ef-aff1-3c5976716d85-control-plane-machine-set-operator-tls" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135601 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8b906fc0-f2bf-4586-97e6-921bbd467b65" volumeName="kubernetes.io/secret/8b906fc0-f2bf-4586-97e6-921bbd467b65-etcd-client" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135612 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa4cba67-b5d4-46c2-8cad-1a1379f764cb" volumeName="kubernetes.io/configmap/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-serving-certs-ca-bundle" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135629 30420 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee376320-9ca0-444d-ab37-9cbcb6729b11" volumeName="kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135643 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f69a00b6-d908-4485-bb0d-57594fc01d24" volumeName="kubernetes.io/configmap/f69a00b6-d908-4485-bb0d-57594fc01d24-telemetry-config" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135654 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="196e7607-1ddf-467b-9901-b4be746130a1" volumeName="kubernetes.io/secret/196e7607-1ddf-467b-9901-b4be746130a1-certs" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135666 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91331360-dc70-45bb-a815-e00664bae6c4" volumeName="kubernetes.io/configmap/91331360-dc70-45bb-a815-e00664bae6c4-cni-sysctl-allowlist" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135680 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b0f77d68-f228-4f82-befb-fb2a2ce2e976" volumeName="kubernetes.io/empty-dir/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-tuned" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135689 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="db376fea-5756-4bc2-9685-f32730b5a6f7" volumeName="kubernetes.io/empty-dir/db376fea-5756-4bc2-9685-f32730b5a6f7-catalog-content" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135701 30420 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="e5e0836f-c0b4-40cd-9f63-55774da2740e" volumeName="kubernetes.io/secret/e5e0836f-c0b4-40cd-9f63-55774da2740e-proxy-tls" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135711 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a" volumeName="kubernetes.io/empty-dir/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-cache" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135722 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1084562a-20a0-432d-b739-90bc0a4daff2" volumeName="kubernetes.io/configmap/1084562a-20a0-432d-b739-90bc0a4daff2-config" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135735 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ad4aa30-f7d5-47ca-b01e-2643f7195685" volumeName="kubernetes.io/projected/1ad4aa30-f7d5-47ca-b01e-2643f7195685-kube-api-access-fp8vt" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135745 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cb5158f-2199-42c0-995a-8490c9ec8a95" volumeName="kubernetes.io/projected/8cb5158f-2199-42c0-995a-8490c9ec8a95-kube-api-access-p2chb" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135778 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ee99294-4785-49d0-b493-0d734cf09396" volumeName="kubernetes.io/configmap/8ee99294-4785-49d0-b493-0d734cf09396-trusted-ca" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135788 30420 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="91331360-dc70-45bb-a815-e00664bae6c4" volumeName="kubernetes.io/projected/91331360-dc70-45bb-a815-e00664bae6c4-kube-api-access-8w8sl" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135797 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af1bbeee-1faf-43d1-943f-ee5319cef4e9" volumeName="kubernetes.io/configmap/af1bbeee-1faf-43d1-943f-ee5319cef4e9-metrics-client-ca" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135809 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6948f93-b573-4f09-b754-aaa2269e2875" volumeName="kubernetes.io/projected/b6948f93-b573-4f09-b754-aaa2269e2875-ca-certs" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135819 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b9c87410-8689-4884-b5a8-df3ecbb7f1a4" volumeName="kubernetes.io/empty-dir/b9c87410-8689-4884-b5a8-df3ecbb7f1a4-utilities" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135849 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0999f781-3299-4cb6-ba76-2a4f4584c685" volumeName="kubernetes.io/projected/0999f781-3299-4cb6-ba76-2a4f4584c685-kube-api-access" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135859 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af1bbeee-1faf-43d1-943f-ee5319cef4e9" volumeName="kubernetes.io/projected/af1bbeee-1faf-43d1-943f-ee5319cef4e9-kube-api-access-nkvcs" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135870 30420 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="ce65f61f-8e3a-47d5-ac12-ad4ab05d2850" volumeName="kubernetes.io/projected/ce65f61f-8e3a-47d5-ac12-ad4ab05d2850-kube-api-access-jmnjp" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135885 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="db52ca42-e458-407f-9eeb-bf6de6405edc" volumeName="kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135899 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0442ec6c-5973-40a5-a0c3-dc02de46d343" volumeName="kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135912 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0c7b317c-d141-4e69-9c82-4a5dda6c3248" volumeName="kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-trusted-ca-bundle" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135928 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ea90fee-5b5e-4b59-bfc4-969ee8c7912e" volumeName="kubernetes.io/configmap/5ea90fee-5b5e-4b59-bfc4-969ee8c7912e-signing-cabundle" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135942 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8b906fc0-f2bf-4586-97e6-921bbd467b65" volumeName="kubernetes.io/configmap/8b906fc0-f2bf-4586-97e6-921bbd467b65-audit-policies" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135962 30420 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="8b906fc0-f2bf-4586-97e6-921bbd467b65" volumeName="kubernetes.io/configmap/8b906fc0-f2bf-4586-97e6-921bbd467b65-trusted-ca-bundle" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135973 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="932a70df-3afe-4873-9449-ab6e061d3fe3" volumeName="kubernetes.io/projected/932a70df-3afe-4873-9449-ab6e061d3fe3-kube-api-access-fv8x5" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.135989 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d" volumeName="kubernetes.io/projected/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-kube-api-access-gpk5h" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136007 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cb8ab19-0564-4182-a7e3-0943c1480663" volumeName="kubernetes.io/projected/1cb8ab19-0564-4182-a7e3-0943c1480663-kube-api-access-4v8jq" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136019 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29fbc78b-1887-40d4-8165-f0f7cc40b583" volumeName="kubernetes.io/configmap/29fbc78b-1887-40d4-8165-f0f7cc40b583-config" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136035 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3646e0cd-49c9-4a98-a2e3-efe9359cc6c4" volumeName="kubernetes.io/projected/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4-kube-api-access-p5dk8" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136046 30420 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="5ea90fee-5b5e-4b59-bfc4-969ee8c7912e" volumeName="kubernetes.io/secret/5ea90fee-5b5e-4b59-bfc4-969ee8c7912e-signing-key" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136057 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="432f611b-a1a2-4cc9-b005-17a16413d281" volumeName="kubernetes.io/secret/432f611b-a1a2-4cc9-b005-17a16413d281-serving-cert" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136074 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="db376fea-5756-4bc2-9685-f32730b5a6f7" volumeName="kubernetes.io/empty-dir/db376fea-5756-4bc2-9685-f32730b5a6f7-utilities" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136086 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88c2a18-11f5-45ef-aff1-3c5976716d85" volumeName="kubernetes.io/projected/f88c2a18-11f5-45ef-aff1-3c5976716d85-kube-api-access-scz6j" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136103 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1084562a-20a0-432d-b739-90bc0a4daff2" volumeName="kubernetes.io/secret/1084562a-20a0-432d-b739-90bc0a4daff2-cert" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136115 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ccdc221-4ec5-487e-8ec4-85284ed628d8" volumeName="kubernetes.io/secret/9ccdc221-4ec5-487e-8ec4-85284ed628d8-metrics-tls" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136126 30420 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="a078565a-6970-4f42-84f4-938f1d637245" volumeName="kubernetes.io/projected/a078565a-6970-4f42-84f4-938f1d637245-kube-api-access-cxv6v" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136140 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aaadd000-4db7-4264-bfc1-b0ad63c8fb05" volumeName="kubernetes.io/projected/aaadd000-4db7-4264-bfc1-b0ad63c8fb05-kube-api-access-v4qbs" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136149 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="accc57fb-75f5-4f89-9804-6ede7f77e27c" volumeName="kubernetes.io/projected/accc57fb-75f5-4f89-9804-6ede7f77e27c-bound-sa-token" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136162 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5900a401-21c2-47f0-a921-47c648da558d" volumeName="kubernetes.io/configmap/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-custom-resource-state-configmap" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136171 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d26036f1-bdce-4ec5-873f-962fa7e8e6c1" volumeName="kubernetes.io/empty-dir/d26036f1-bdce-4ec5-873f-962fa7e8e6c1-operand-assets" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136182 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec53d7fa-445b-4e1d-84ef-545f08e80ccc" volumeName="kubernetes.io/configmap/ec53d7fa-445b-4e1d-84ef-545f08e80ccc-config" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136196 30420 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="43d54514-989c-4c82-93f9-153b44eacdd1" volumeName="kubernetes.io/secret/43d54514-989c-4c82-93f9-153b44eacdd1-default-certificate" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136204 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fc664ff-2e8f-441d-82dc-8f21c1d362d7" volumeName="kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-config" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136216 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee376320-9ca0-444d-ab37-9cbcb6729b11" volumeName="kubernetes.io/projected/ee376320-9ca0-444d-ab37-9cbcb6729b11-kube-api-access-25k9g" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136227 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d72e695-0183-4ee8-8add-5425e67f7138" volumeName="kubernetes.io/secret/0d72e695-0183-4ee8-8add-5425e67f7138-serving-cert" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136237 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6" volumeName="kubernetes.io/projected/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6-kube-api-access" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136247 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5900a401-21c2-47f0-a921-47c648da558d" volumeName="kubernetes.io/secret/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136256 30420 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71755097-7543-48f8-8925-0e21650bf8f6" volumeName="kubernetes.io/configmap/71755097-7543-48f8-8925-0e21650bf8f6-service-ca-bundle" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136267 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e8d3cf68-ed97-45b9-8c83-b42bb1f789fc" volumeName="kubernetes.io/projected/e8d3cf68-ed97-45b9-8c83-b42bb1f789fc-kube-api-access-59hld" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136276 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5900a401-21c2-47f0-a921-47c648da558d" volumeName="kubernetes.io/secret/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-tls" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136286 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9cfd2323-c33a-4d80-9c25-710920c0e605" volumeName="kubernetes.io/secret/9cfd2323-c33a-4d80-9c25-710920c0e605-prometheus-operator-kube-rbac-proxy-config" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136298 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f5c64aa-676e-4e48-b714-02f6edb1d361" volumeName="kubernetes.io/secret/9f5c64aa-676e-4e48-b714-02f6edb1d361-cert" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136308 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bb942756-bac7-414d-b179-cebdce588a13" volumeName="kubernetes.io/projected/bb942756-bac7-414d-b179-cebdce588a13-kube-api-access-2ktpl" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: 
I0318 10:10:36.136318 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d0605021-862d-424a-a4c1-037fb005b77e" volumeName="kubernetes.io/secret/d0605021-862d-424a-a4c1-037fb005b77e-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136329 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1084562a-20a0-432d-b739-90bc0a4daff2" volumeName="kubernetes.io/configmap/1084562a-20a0-432d-b739-90bc0a4daff2-images" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136340 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29fbc78b-1887-40d4-8165-f0f7cc40b583" volumeName="kubernetes.io/projected/29fbc78b-1887-40d4-8165-f0f7cc40b583-kube-api-access-vm2nt" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136352 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a6a616d-012a-479e-ab3d-b21295ea1805" volumeName="kubernetes.io/secret/6a6a616d-012a-479e-ab3d-b21295ea1805-serving-cert" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136364 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6f266bad-8b30-4300-ad93-9d48e61f2440" volumeName="kubernetes.io/projected/6f266bad-8b30-4300-ad93-9d48e61f2440-kube-api-access-shbrj" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136372 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a078565a-6970-4f42-84f4-938f1d637245" volumeName="kubernetes.io/secret/a078565a-6970-4f42-84f4-938f1d637245-serving-cert" seLinuxMountContext="" Mar 18 10:10:36.136193 master-0 
kubenswrapper[30420]: I0318 10:10:36.136385 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d" volumeName="kubernetes.io/configmap/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-config" seLinuxMountContext=""
Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136394 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="db376fea-5756-4bc2-9685-f32730b5a6f7" volumeName="kubernetes.io/projected/db376fea-5756-4bc2-9685-f32730b5a6f7-kube-api-access-r6qn5" seLinuxMountContext=""
Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136406 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d014721-ed53-447a-b737-c496bbba18be" volumeName="kubernetes.io/configmap/2d014721-ed53-447a-b737-c496bbba18be-auth-proxy-config" seLinuxMountContext=""
Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136416 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8641c1d1-dd79-4f1f-9343-52d1ee6faf9f" volumeName="kubernetes.io/projected/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-kube-api-access-d89r9" seLinuxMountContext=""
Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136426 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce65f61f-8e3a-47d5-ac12-ad4ab05d2850" volumeName="kubernetes.io/secret/ce65f61f-8e3a-47d5-ac12-ad4ab05d2850-samples-operator-tls" seLinuxMountContext=""
Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136438 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f875878f-3588-42f1-9488-750d9f4582f8" volumeName="kubernetes.io/secret/f875878f-3588-42f1-9488-750d9f4582f8-webhook-certs" seLinuxMountContext=""
Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136447 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" volumeName="kubernetes.io/empty-dir/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480-available-featuregates" seLinuxMountContext=""
Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136459 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8126b78e-d1e4-4de7-a71d-ebc9fa0afdae" volumeName="kubernetes.io/projected/8126b78e-d1e4-4de7-a71d-ebc9fa0afdae-kube-api-access-hww8g" seLinuxMountContext=""
Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136470 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1084562a-20a0-432d-b739-90bc0a4daff2" volumeName="kubernetes.io/projected/1084562a-20a0-432d-b739-90bc0a4daff2-kube-api-access-qmsjt" seLinuxMountContext=""
Mar 18 10:10:36.136193 master-0 kubenswrapper[30420]: I0318 10:10:36.136478 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1084562a-20a0-432d-b739-90bc0a4daff2" volumeName="kubernetes.io/secret/1084562a-20a0-432d-b739-90bc0a4daff2-cluster-baremetal-operator-tls" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.136490 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8641c1d1-dd79-4f1f-9343-52d1ee6faf9f" volumeName="kubernetes.io/secret/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-cloud-controller-manager-operator-tls" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.136499 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8b906fc0-f2bf-4586-97e6-921bbd467b65" volumeName="kubernetes.io/configmap/8b906fc0-f2bf-4586-97e6-921bbd467b65-etcd-serving-ca" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.135745 30420 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.136545 30420 factory.go:103] Registering Raw factory
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.136590 30420 manager.go:1196] Started watching for new ooms in manager
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.137148 30420 manager.go:319] Starting recovery of all containers
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.136510 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d02e790-b9d0-4e2d-a97d-ec2eaf720f28" volumeName="kubernetes.io/configmap/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-ovnkube-config" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.140125 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bb35841e-d992-4044-aaaa-06c9faf47bd0" volumeName="kubernetes.io/configmap/bb35841e-d992-4044-aaaa-06c9faf47bd0-config" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.140165 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="432f611b-a1a2-4cc9-b005-17a16413d281" volumeName="kubernetes.io/projected/432f611b-a1a2-4cc9-b005-17a16413d281-kube-api-access" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.140201 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fc664ff-2e8f-441d-82dc-8f21c1d362d7" volumeName="kubernetes.io/secret/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-serving-cert" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.140225 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a" volumeName="kubernetes.io/projected/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-kube-api-access-kxl7x" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.140257 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d014721-ed53-447a-b737-c496bbba18be" volumeName="kubernetes.io/secret/2d014721-ed53-447a-b737-c496bbba18be-proxy-tls" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.140279 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ea90fee-5b5e-4b59-bfc4-969ee8c7912e" volumeName="kubernetes.io/projected/5ea90fee-5b5e-4b59-bfc4-969ee8c7912e-kube-api-access-b46jq" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.140303 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8b906fc0-f2bf-4586-97e6-921bbd467b65" volumeName="kubernetes.io/projected/8b906fc0-f2bf-4586-97e6-921bbd467b65-kube-api-access-rw4s4" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.140329 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f076eaf0-b041-4db0-ba06-3d85e23bb654" volumeName="kubernetes.io/configmap/f076eaf0-b041-4db0-ba06-3d85e23bb654-config" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.140350 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="432f611b-a1a2-4cc9-b005-17a16413d281" volumeName="kubernetes.io/configmap/432f611b-a1a2-4cc9-b005-17a16413d281-service-ca" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.140375 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="582d2ba8-1210-47d0-a530-0b20b2fdde22" volumeName="kubernetes.io/secret/582d2ba8-1210-47d0-a530-0b20b2fdde22-tls-certificates" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.140396 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91331360-dc70-45bb-a815-e00664bae6c4" volumeName="kubernetes.io/configmap/91331360-dc70-45bb-a815-e00664bae6c4-whereabouts-flatfile-configmap" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.140417 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa4cba67-b5d4-46c2-8cad-1a1379f764cb" volumeName="kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-secret-telemeter-client-kube-rbac-proxy-config" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.140444 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0c7b317c-d141-4e69-9c82-4a5dda6c3248" volumeName="kubernetes.io/secret/0c7b317c-d141-4e69-9c82-4a5dda6c3248-serving-cert" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.140465 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a6a616d-012a-479e-ab3d-b21295ea1805" volumeName="kubernetes.io/configmap/6a6a616d-012a-479e-ab3d-b21295ea1805-config" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.140491 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="accc57fb-75f5-4f89-9804-6ede7f77e27c" volumeName="kubernetes.io/configmap/accc57fb-75f5-4f89-9804-6ede7f77e27c-trusted-ca" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.140511 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b9c87410-8689-4884-b5a8-df3ecbb7f1a4" volumeName="kubernetes.io/projected/b9c87410-8689-4884-b5a8-df3ecbb7f1a4-kube-api-access-l5j9d" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.140533 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bb35841e-d992-4044-aaaa-06c9faf47bd0" volumeName="kubernetes.io/projected/bb35841e-d992-4044-aaaa-06c9faf47bd0-kube-api-access-zlxfz" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.140560 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="106fc2a2-9e7b-4f86-94b8-b1a1906646d8" volumeName="kubernetes.io/projected/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-kube-api-access-fqx6m" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.140580 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2635254-a491-42e5-b598-461c24bf77ca" volumeName="kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.140601 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0c7b317c-d141-4e69-9c82-4a5dda6c3248" volumeName="kubernetes.io/secret/0c7b317c-d141-4e69-9c82-4a5dda6c3248-encryption-config" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.140629 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0" volumeName="kubernetes.io/configmap/2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0-mcc-auth-proxy-config" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.140650 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74476be5-669a-4737-b93b-c4870423a4da" volumeName="kubernetes.io/projected/74476be5-669a-4737-b93b-c4870423a4da-kube-api-access-nvx6m" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.140676 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a078565a-6970-4f42-84f4-938f1d637245" volumeName="kubernetes.io/configmap/a078565a-6970-4f42-84f4-938f1d637245-etcd-service-ca" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.140696 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa4cba67-b5d4-46c2-8cad-1a1379f764cb" volumeName="kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-telemeter-client-tls" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.140716 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="caec44dc-aab7-4407-b34a-52bbe4b4f635" volumeName="kubernetes.io/secret/caec44dc-aab7-4407-b34a-52bbe4b4f635-cloud-credential-operator-serving-cert" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.140741 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d26036f1-bdce-4ec5-873f-962fa7e8e6c1" volumeName="kubernetes.io/secret/d26036f1-bdce-4ec5-873f-962fa7e8e6c1-cluster-olm-operator-serving-cert" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.140781 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0c7b317c-d141-4e69-9c82-4a5dda6c3248" volumeName="kubernetes.io/secret/0c7b317c-d141-4e69-9c82-4a5dda6c3248-etcd-client" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.140809 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74795f5d-dcd7-4723-8931-c34b59ce3087" volumeName="kubernetes.io/projected/74795f5d-dcd7-4723-8931-c34b59ce3087-kube-api-access-8rzsk" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.140871 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="accc57fb-75f5-4f89-9804-6ede7f77e27c" volumeName="kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.140902 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a" volumeName="kubernetes.io/projected/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-ca-certs" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.140929 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" volumeName="kubernetes.io/secret/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480-serving-cert" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.140959 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29fbc78b-1887-40d4-8165-f0f7cc40b583" volumeName="kubernetes.io/secret/29fbc78b-1887-40d4-8165-f0f7cc40b583-machine-api-operator-tls" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.140991 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43d54514-989c-4c82-93f9-153b44eacdd1" volumeName="kubernetes.io/secret/43d54514-989c-4c82-93f9-153b44eacdd1-metrics-certs" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.141015 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0945a421-d7c4-46df-b3d9-507443627d51" volumeName="kubernetes.io/empty-dir/0945a421-d7c4-46df-b3d9-507443627d51-utilities" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.141045 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62b82d72-d73c-451a-84e1-551d73036aa8" volumeName="kubernetes.io/configmap/62b82d72-d73c-451a-84e1-551d73036aa8-iptables-alerter-script" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.141067 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0945a421-d7c4-46df-b3d9-507443627d51" volumeName="kubernetes.io/projected/0945a421-d7c4-46df-b3d9-507443627d51-kube-api-access-k29kr" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.141105 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="106fc2a2-9e7b-4f86-94b8-b1a1906646d8" volumeName="kubernetes.io/configmap/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-metrics-server-audit-profiles" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.141134 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f5c64aa-676e-4e48-b714-02f6edb1d361" volumeName="kubernetes.io/configmap/9f5c64aa-676e-4e48-b714-02f6edb1d361-auth-proxy-config" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.141156 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa4cba67-b5d4-46c2-8cad-1a1379f764cb" volumeName="kubernetes.io/configmap/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-telemeter-trusted-ca-bundle" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.141184 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a6a616d-012a-479e-ab3d-b21295ea1805" volumeName="kubernetes.io/projected/6a6a616d-012a-479e-ab3d-b21295ea1805-kube-api-access" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.141212 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6948f93-b573-4f09-b754-aaa2269e2875" volumeName="kubernetes.io/projected/b6948f93-b573-4f09-b754-aaa2269e2875-kube-api-access-t2g9q" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.141236 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2635254-a491-42e5-b598-461c24bf77ca" volumeName="kubernetes.io/configmap/c2635254-a491-42e5-b598-461c24bf77ca-trusted-ca" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.141264 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e5e0836f-c0b4-40cd-9f63-55774da2740e" volumeName="kubernetes.io/projected/e5e0836f-c0b4-40cd-9f63-55774da2740e-kube-api-access-k94j4" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.141285 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bb35841e-d992-4044-aaaa-06c9faf47bd0" volumeName="kubernetes.io/secret/bb35841e-d992-4044-aaaa-06c9faf47bd0-serving-cert" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.141313 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bb942756-bac7-414d-b179-cebdce588a13" volumeName="kubernetes.io/secret/bb942756-bac7-414d-b179-cebdce588a13-webhook-cert" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.141342 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f69a00b6-d908-4485-bb0d-57594fc01d24" volumeName="kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.141365 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0" volumeName="kubernetes.io/secret/2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0-proxy-tls" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.141392 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71755097-7543-48f8-8925-0e21650bf8f6" volumeName="kubernetes.io/configmap/71755097-7543-48f8-8925-0e21650bf8f6-trusted-ca-bundle" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.141413 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ee99294-4785-49d0-b493-0d734cf09396" volumeName="kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.141439 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d0605021-862d-424a-a4c1-037fb005b77e" volumeName="kubernetes.io/configmap/d0605021-862d-424a-a4c1-037fb005b77e-ovnkube-config" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.141460 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d26036f1-bdce-4ec5-873f-962fa7e8e6c1" volumeName="kubernetes.io/projected/d26036f1-bdce-4ec5-873f-962fa7e8e6c1-kube-api-access-lhzg4" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.141479 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f875878f-3588-42f1-9488-750d9f4582f8" volumeName="kubernetes.io/projected/f875878f-3588-42f1-9488-750d9f4582f8-kube-api-access-nn7zt" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.141506 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0999f781-3299-4cb6-ba76-2a4f4584c685" volumeName="kubernetes.io/secret/0999f781-3299-4cb6-ba76-2a4f4584c685-serving-cert" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.141526 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5900a401-21c2-47f0-a921-47c648da558d" volumeName="kubernetes.io/empty-dir/5900a401-21c2-47f0-a921-47c648da558d-volume-directive-shadow" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.141552 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9cfd2323-c33a-4d80-9c25-710920c0e605" volumeName="kubernetes.io/projected/9cfd2323-c33a-4d80-9c25-710920c0e605-kube-api-access-blfkg" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.141620 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bb942756-bac7-414d-b179-cebdce588a13" volumeName="kubernetes.io/configmap/bb942756-bac7-414d-b179-cebdce588a13-ovnkube-identity-cm" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.141643 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d0605021-862d-424a-a4c1-037fb005b77e" volumeName="kubernetes.io/projected/d0605021-862d-424a-a4c1-037fb005b77e-kube-api-access-cxj5c" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.141668 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f076eaf0-b041-4db0-ba06-3d85e23bb654" volumeName="kubernetes.io/configmap/f076eaf0-b041-4db0-ba06-3d85e23bb654-service-ca-bundle" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.141730 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0999f781-3299-4cb6-ba76-2a4f4584c685" volumeName="kubernetes.io/configmap/0999f781-3299-4cb6-ba76-2a4f4584c685-config" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.141761 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="106fc2a2-9e7b-4f86-94b8-b1a1906646d8" volumeName="kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-client-ca-bundle" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.141790 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bdf80ddc-7c99-4f60-814b-ba98809ef41d" volumeName="kubernetes.io/secret/bdf80ddc-7c99-4f60-814b-ba98809ef41d-apiservice-cert" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.141818 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec53d7fa-445b-4e1d-84ef-545f08e80ccc" volumeName="kubernetes.io/projected/ec53d7fa-445b-4e1d-84ef-545f08e80ccc-kube-api-access-wj9sq" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.141891 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="03de1ea6-da57-4e13-8e5a-d5e10a9f9957" volumeName="kubernetes.io/configmap/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-cni-binary-copy" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.141926 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29fbc78b-1887-40d4-8165-f0f7cc40b583" volumeName="kubernetes.io/configmap/29fbc78b-1887-40d4-8165-f0f7cc40b583-images" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.142012 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62b82d72-d73c-451a-84e1-551d73036aa8" volumeName="kubernetes.io/projected/62b82d72-d73c-451a-84e1-551d73036aa8-kube-api-access-lvnrf" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.142097 30420 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f076eaf0-b041-4db0-ba06-3d85e23bb654" volumeName="kubernetes.io/configmap/f076eaf0-b041-4db0-ba06-3d85e23bb654-trusted-ca-bundle" seLinuxMountContext=""
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.142129 30420 reconstruct.go:97] "Volume reconstruction finished"
Mar 18 10:10:36.147097 master-0 kubenswrapper[30420]: I0318 10:10:36.142198 30420 reconciler.go:26] "Reconciler: start to sync state"
Mar 18 10:10:36.149884 master-0 kubenswrapper[30420]: E0318 10:10:36.149255 30420 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Mar 18 10:10:36.160737 master-0 kubenswrapper[30420]: I0318 10:10:36.160680 30420 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 18 10:10:36.165907 master-0 kubenswrapper[30420]: I0318 10:10:36.165815 30420 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 18 10:10:36.165907 master-0 kubenswrapper[30420]: I0318 10:10:36.165905 30420 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 18 10:10:36.166081 master-0 kubenswrapper[30420]: I0318 10:10:36.165928 30420 kubelet.go:2335] "Starting kubelet main sync loop"
Mar 18 10:10:36.166081 master-0 kubenswrapper[30420]: E0318 10:10:36.165984 30420 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 18 10:10:36.184484 master-0 kubenswrapper[30420]: I0318 10:10:36.184410 30420 generic.go:334] "Generic (PLEG): container finished" podID="d26036f1-bdce-4ec5-873f-962fa7e8e6c1" containerID="0e2eb9f88477dff52f2e8f12bdb93c5b6461b1901f2eeb98ccf29a08010685ef" exitCode=0
Mar 18 10:10:36.184484 master-0 kubenswrapper[30420]: I0318 10:10:36.184467 30420 generic.go:334] "Generic (PLEG): container finished" podID="d26036f1-bdce-4ec5-873f-962fa7e8e6c1" containerID="c8f206ca8c94fc19bfa804f2e3458858b441e4df0a8873ee86942ce37a6e1dff" exitCode=0
Mar 18 10:10:36.184484 master-0 kubenswrapper[30420]: I0318 10:10:36.184481 30420 generic.go:334] "Generic (PLEG): container finished" podID="d26036f1-bdce-4ec5-873f-962fa7e8e6c1" containerID="ded65abc153650de9d5b3f05283a7442214a212644c7845fac73ca03c4499d84" exitCode=0
Mar 18 10:10:36.186916 master-0 kubenswrapper[30420]: I0318 10:10:36.186884 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-7fl4x_bb942756-bac7-414d-b179-cebdce588a13/approver/1.log"
Mar 18 10:10:36.187238 master-0 kubenswrapper[30420]: I0318 10:10:36.187214 30420 generic.go:334] "Generic (PLEG): container finished" podID="bb942756-bac7-414d-b179-cebdce588a13" containerID="8009f4f9bf68efb70bfa7b66731f5e2be25cbb5d97d4aeafc6a4a27c0d88d49e" exitCode=1
Mar 18 10:10:36.188890 master-0 kubenswrapper[30420]: I0318 10:10:36.188870 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-lnq7l_1084562a-20a0-432d-b739-90bc0a4daff2/cluster-baremetal-operator/2.log"
Mar 18 10:10:36.189312 master-0 kubenswrapper[30420]: I0318 10:10:36.189255 30420 generic.go:334] "Generic (PLEG): container finished" podID="1084562a-20a0-432d-b739-90bc0a4daff2" containerID="c0b6e3b46ac87b79d91e8ba9d05e392b0a7e135e1b0676e08c471b66babdb7f6" exitCode=1
Mar 18 10:10:36.191857 master-0 kubenswrapper[30420]: I0318 10:10:36.191228 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-vj8tt_3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6/kube-scheduler-operator-container/1.log"
Mar 18 10:10:36.191857 master-0 kubenswrapper[30420]: I0318 10:10:36.191268 30420 generic.go:334] "Generic (PLEG): container finished" podID="3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6" containerID="ece038fe79c27be1029079683dfa33a1fa90e9515d0fac47aae2ee51f3ffd2df" exitCode=255
Mar 18 10:10:36.196517 master-0 kubenswrapper[30420]: I0318 10:10:36.196469 30420 generic.go:334] "Generic (PLEG): container finished" podID="7d5ce05b3d592e63f1f92202d52b9635" containerID="51b7edc7a7043b6e7a110520107ff6d77f1544d8cbe4bac90f24bd4ae0e3e289" exitCode=0
Mar 18 10:10:36.199732 master-0 kubenswrapper[30420]: I0318 10:10:36.199689 30420 generic.go:334] "Generic (PLEG): container finished" podID="bb35841e-d992-4044-aaaa-06c9faf47bd0" containerID="d49c249df3f862614187a3b820449471cb0684b53fb2bc542b281bed1f3be2fd" exitCode=0
Mar 18 10:10:36.201695 master-0 kubenswrapper[30420]: I0318 10:10:36.201654 30420 generic.go:334] "Generic (PLEG): container finished" podID="8e812dd9-cd05-4e9e-8710-d0920181ece2" containerID="0f3ba17641fd2eeb6aa8e7525f8b6f8d95a3be2ff7d2acad4eb9670c5982bbeb" exitCode=0
Mar 18 10:10:36.204312 master-0 kubenswrapper[30420]: I0318 10:10:36.204281 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-nq7mw_0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a/manager/0.log"
Mar 18 10:10:36.204608 master-0 kubenswrapper[30420]: I0318 10:10:36.204583 30420 generic.go:334] "Generic (PLEG): container finished" podID="0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a" containerID="89f9d8c31d719734af3431b3cec84aa03bf298440dd062c3328c469e4d1b49bb" exitCode=1
Mar 18 10:10:36.208675 master-0 kubenswrapper[30420]: I0318 10:10:36.208646 30420 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="4399c846d156fc9ec273e7482a7df69bd6d7ebd35bceea9ea824c44fc0dbb98b" exitCode=0
Mar 18 10:10:36.208772 master-0 kubenswrapper[30420]: I0318 10:10:36.208759 30420 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="c51a160bfa16a28b74f81d311f303e209d7ed9b37be27ca1db9e534e7071f1af" exitCode=0
Mar 18 10:10:36.208850 master-0 kubenswrapper[30420]: I0318 10:10:36.208827 30420 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="ec0a4a4a27c5788cf435e3f981e3abe7cd525b4f9b545a25440129af48eb261e" exitCode=0
Mar 18 10:10:36.210399 master-0 kubenswrapper[30420]: E0318 10:10:36.210373 30420 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 10:10:36.210842 master-0 kubenswrapper[30420]: I0318 10:10:36.210811 30420 generic.go:334] "Generic (PLEG): container finished" podID="db376fea-5756-4bc2-9685-f32730b5a6f7" containerID="8a4454e2a9f9cbf1f5dc18fe41a00327026fa7988233c2ea2c84ec074c1b0faf" exitCode=0
Mar 18 10:10:36.210944 master-0 kubenswrapper[30420]: I0318 10:10:36.210931 30420 generic.go:334] "Generic (PLEG): container finished" podID="db376fea-5756-4bc2-9685-f32730b5a6f7" containerID="3895b0bbebe711b5e51fd8fde77e2f404e00d676164e6f589e15a4b9e9bdc150" exitCode=0
Mar 18 10:10:36.216033 master-0 kubenswrapper[30420]: I0318 10:10:36.215989 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-598fbc5f8f-s7rm6_c2635254-a491-42e5-b598-461c24bf77ca/cluster-node-tuning-operator/0.log"
Mar 18 10:10:36.216286 master-0 kubenswrapper[30420]: I0318 10:10:36.216036 30420 generic.go:334] "Generic (PLEG): container finished" podID="c2635254-a491-42e5-b598-461c24bf77ca" containerID="c59a5fbf874d40b4d6dbdabc263d54ba8033378f9b3eccda436cb84f154d827b" exitCode=1
Mar 18 10:10:36.221067 master-0 kubenswrapper[30420]: I0318 10:10:36.221027 30420 generic.go:334] "Generic (PLEG): container finished" podID="1cb8ab19-0564-4182-a7e3-0943c1480663" containerID="56303ad5942aabce8c0f739f5e78ec830c4f13ce66a281475244962d17c4dbb4" exitCode=0
Mar 18 10:10:36.224613 master-0 kubenswrapper[30420]: I0318 10:10:36.224580 30420 generic.go:334] "Generic (PLEG): container finished" podID="6f266bad-8b30-4300-ad93-9d48e61f2440" containerID="fb1e06109c9333d787d8e6b957a55759794e573da59639d9f2a8746b35212fab" exitCode=0
Mar 18 10:10:36.238146 master-0 kubenswrapper[30420]: I0318 10:10:36.238076 30420 generic.go:334] "Generic (PLEG): container finished" podID="43d54514-989c-4c82-93f9-153b44eacdd1" containerID="027c606848ee1832749ed6e321be439a9482e3f79b6245a43fee2d25af9358b6" exitCode=0
Mar 18 10:10:36.240680 master-0 kubenswrapper[30420]: I0318 10:10:36.240649 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_af8e875368eec13e995ea08015e08c42/kube-controller-manager-cert-syncer/0.log"
Mar 18 10:10:36.240680 master-0 kubenswrapper[30420]: I0318 10:10:36.240677 30420 generic.go:334] "Generic (PLEG): container finished" podID="af8e875368eec13e995ea08015e08c42" containerID="7ca73c96270bb01e4b2a501f5fca8a82d6d3109e114172103ea987822829d77c" exitCode=0
Mar 18 10:10:36.240807 master-0 kubenswrapper[30420]: I0318 10:10:36.240690 30420 generic.go:334] "Generic (PLEG): container finished" podID="af8e875368eec13e995ea08015e08c42" containerID="fce78d10ab44ad6e3870abc2e19feeb6f5ae7acb96a08b13653663840e0cbb1b" exitCode=0
Mar 18 10:10:36.240807 master-0 kubenswrapper[30420]: I0318 10:10:36.240697 30420 generic.go:334] "Generic (PLEG): container finished" podID="af8e875368eec13e995ea08015e08c42" containerID="eeb871e8e559b9fd82b985e8a38853c6cc1a0962899e9d61d0017f002e610d41" exitCode=0
Mar 18 10:10:36.240807 master-0 kubenswrapper[30420]: I0318 10:10:36.240705 30420 generic.go:334] "Generic (PLEG): container finished" podID="af8e875368eec13e995ea08015e08c42" containerID="8a062b1b85a12fd918c3c62a85847e5a60612517f0ee750aabe64bd125668daf" exitCode=2
Mar 18 10:10:36.242375 master-0 kubenswrapper[30420]: I0318 10:10:36.242327 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-5c6485487f-95jvh_1ad4aa30-f7d5-47ca-b01e-2643f7195685/machine-approver-controller/0.log"
Mar 18 10:10:36.243065 master-0 kubenswrapper[30420]: I0318 10:10:36.243029 30420 generic.go:334] "Generic (PLEG): container finished" podID="1ad4aa30-f7d5-47ca-b01e-2643f7195685" containerID="989ed9d1224874eccaf2482bae9307a2390fd6b1f5f7b0d51c60b2a5d20c283b" exitCode=255
Mar 18 10:10:36.245678 master-0 kubenswrapper[30420]: I0318 10:10:36.245641 30420 generic.go:334] "Generic (PLEG): container finished" podID="1c62ceda-5e7e-4392-83b9-0d80856e1a96" containerID="64fd17a4dc869dbbdd2a4f39ac14053290f921c096dddb0c79f7bc300e3e1965" exitCode=0
Mar 18 10:10:36.248252 master-0 kubenswrapper[30420]: I0318 10:10:36.248221 30420 generic.go:334] "Generic (PLEG): container finished" podID="1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7" containerID="21100e562902d6efca61425bd34ddb104507d8d781f4e3a980d72c66d6282ba6" exitCode=0
Mar 18 10:10:36.248252 master-0 kubenswrapper[30420]: I0318 10:10:36.248239 30420 generic.go:334] "Generic (PLEG): container finished" podID="1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7" containerID="d8336fe95d751b483d2ff986081042be8fc84379e88cfb3baaea2d45717c14ee" exitCode=0
Mar 18 10:10:36.250198 master-0 kubenswrapper[30420]: I0318 10:10:36.250165 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-kr5kz_accc57fb-75f5-4f89-9804-6ede7f77e27c/ingress-operator/4.log"
Mar 18 10:10:36.250531 master-0 kubenswrapper[30420]: I0318 10:10:36.250499 30420 generic.go:334] "Generic (PLEG): container finished" podID="accc57fb-75f5-4f89-9804-6ede7f77e27c" containerID="ced2dd809e0469dcbc3622bf167909cd0814985fc6dca12aa44de7553e61867c" exitCode=1
Mar 18 10:10:36.267275 master-0 kubenswrapper[30420]: E0318 10:10:36.266211 30420 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 18 10:10:36.279949 master-0 kubenswrapper[30420]: I0318 10:10:36.268031 30420 generic.go:334] "Generic (PLEG): container finished" podID="9ccdc221-4ec5-487e-8ec4-85284ed628d8" containerID="d104795039a77eee9eb4fddfb0911cce88afaee884dd9159c6ea0d77b9f36476" exitCode=0
Mar 18 10:10:36.279949 master-0 kubenswrapper[30420]: I0318 10:10:36.271256 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-4-master-0_449dc8b3-72b7-4be5-b5ab-ed4d632f52b2/installer/0.log"
Mar 18 10:10:36.279949 master-0 kubenswrapper[30420]: I0318 10:10:36.271282 30420 generic.go:334] "Generic (PLEG): container finished" podID="449dc8b3-72b7-4be5-b5ab-ed4d632f52b2" containerID="01fbb9d5ae86373a51c41f3a5e60d86ed2cd0a315f2ae635082fa660578bf765" exitCode=1
Mar 18 10:10:36.279949 master-0 kubenswrapper[30420]: I0318 10:10:36.277843 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-2l6cq_932a70df-3afe-4873-9449-ab6e061d3fe3/snapshot-controller/3.log"
Mar 18 10:10:36.279949 master-0 kubenswrapper[30420]: I0318 10:10:36.277895 30420 generic.go:334] "Generic (PLEG): container finished" podID="932a70df-3afe-4873-9449-ab6e061d3fe3" containerID="a4a231c549055fa855added61a1a04bcb99c420a8c29b8d952b99e6ee3109585" exitCode=1
Mar 18 10:10:36.280331 master-0 kubenswrapper[30420]: I0318 10:10:36.280117 30420 generic.go:334] "Generic (PLEG): container finished" podID="3646e0cd-49c9-4a98-a2e3-efe9359cc6c4" containerID="69f2cdbc33296c63e514edbad7b73c69b46a3bfd3f3df3701dfc360a76760a09" exitCode=0
Mar 18 10:10:36.285094 master-0 kubenswrapper[30420]: I0318 10:10:36.285045 30420 generic.go:334] "Generic (PLEG): container finished" podID="a078565a-6970-4f42-84f4-938f1d637245" containerID="53e820dc65799d326622907d56bfabcb65416af56a015afddd831825233f23fe" exitCode=0
Mar 18 10:10:36.291987 master-0 kubenswrapper[30420]: I0318 10:10:36.291950 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-495pg_0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480/openshift-config-operator/3.log"
Mar 18 10:10:36.292645 master-0 kubenswrapper[30420]: I0318 10:10:36.292607 30420 generic.go:334] "Generic (PLEG): container finished" podID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerID="6ed6678817d1dbeb82e03a25e183f0798cbf1dafc08404b095ad2e689d372212" exitCode=255
Mar 18 10:10:36.292645 master-0 kubenswrapper[30420]: I0318 10:10:36.292632 30420 generic.go:334] "Generic (PLEG): container finished" podID="0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480" containerID="981f5359f2b3c5ba98385487e0fffb3f9c331fb34bb0e106e475367f63bb51f9" exitCode=0
Mar 18 10:10:36.296348 master-0 kubenswrapper[30420]: I0318 10:10:36.296306 30420 generic.go:334] "Generic (PLEG): container finished" podID="0999f781-3299-4cb6-ba76-2a4f4584c685"
containerID="bdf23e456932d75fae6cdcf4a2bdaca513da90b17853bb40022bebbd243e87d8" exitCode=0 Mar 18 10:10:36.298890 master-0 kubenswrapper[30420]: I0318 10:10:36.298781 30420 generic.go:334] "Generic (PLEG): container finished" podID="b9c87410-8689-4884-b5a8-df3ecbb7f1a4" containerID="e449b47779a9d7dba0806705cf39954c432c7970c3371ed0b172d5bc7722060d" exitCode=0 Mar 18 10:10:36.298890 master-0 kubenswrapper[30420]: I0318 10:10:36.298881 30420 generic.go:334] "Generic (PLEG): container finished" podID="b9c87410-8689-4884-b5a8-df3ecbb7f1a4" containerID="6e2ac2ef1c2d040695f9086d50b707203dabf820029ae8a9e577f8116338d92f" exitCode=0 Mar 18 10:10:36.306551 master-0 kubenswrapper[30420]: I0318 10:10:36.306497 30420 generic.go:334] "Generic (PLEG): container finished" podID="be8bd84c-8035-4bec-a725-b0ae89382c0f" containerID="acbbc72042bd93d1606b83c55c35f1b48dc5dce61f6ad5d66183b045a74dff9a" exitCode=0 Mar 18 10:10:36.310500 master-0 kubenswrapper[30420]: E0318 10:10:36.310461 30420 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 10:10:36.310687 master-0 kubenswrapper[30420]: I0318 10:10:36.310552 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8e27b7d086edf5d2cf47b703574641d8/kube-scheduler-cert-syncer/0.log" Mar 18 10:10:36.311284 master-0 kubenswrapper[30420]: I0318 10:10:36.311244 30420 generic.go:334] "Generic (PLEG): container finished" podID="8e27b7d086edf5d2cf47b703574641d8" containerID="0f4bf1dfc4a190fd3410aa065645689966e325eb73cf7788b53ae0a9bf57f3cc" exitCode=0 Mar 18 10:10:36.311284 master-0 kubenswrapper[30420]: I0318 10:10:36.311279 30420 generic.go:334] "Generic (PLEG): container finished" podID="8e27b7d086edf5d2cf47b703574641d8" containerID="504c7c58af279fedab2f56000cc691abf8096faa6bf0c02f961583e20a138ed6" exitCode=0 Mar 18 10:10:36.311399 master-0 kubenswrapper[30420]: I0318 10:10:36.311289 30420 generic.go:334] "Generic 
(PLEG): container finished" podID="8e27b7d086edf5d2cf47b703574641d8" containerID="e73e9ab6250891a74742cf894dfa6d6f12c07f81c7c6e29abf71445a93b042c6" exitCode=2 Mar 18 10:10:36.311399 master-0 kubenswrapper[30420]: I0318 10:10:36.311297 30420 generic.go:334] "Generic (PLEG): container finished" podID="8e27b7d086edf5d2cf47b703574641d8" containerID="3e2c362efe2fe8c48b78a8150b0e9484398aa97bf0cb69d78e0777b3495062fc" exitCode=0 Mar 18 10:10:36.313009 master-0 kubenswrapper[30420]: I0318 10:10:36.312951 30420 generic.go:334] "Generic (PLEG): container finished" podID="a6716938-ca14-4000-b7f1-b60e93e93c0d" containerID="07f18c8da1828af97eeefd0d942acb995fabaae660b2da8d651807992de76bb4" exitCode=0 Mar 18 10:10:36.315088 master-0 kubenswrapper[30420]: I0318 10:10:36.315043 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_5fb70bf3-93cd-4000-be1a-8e21846d5709/installer/0.log" Mar 18 10:10:36.315171 master-0 kubenswrapper[30420]: I0318 10:10:36.315088 30420 generic.go:334] "Generic (PLEG): container finished" podID="5fb70bf3-93cd-4000-be1a-8e21846d5709" containerID="22a0f37f7177929cbf4f5043d36e78b2ea4f84b8562060ced4185a407eb57943" exitCode=1 Mar 18 10:10:36.322692 master-0 kubenswrapper[30420]: I0318 10:10:36.322645 30420 generic.go:334] "Generic (PLEG): container finished" podID="0d72e695-0183-4ee8-8add-5425e67f7138" containerID="7d6fd2e1bc4be1b2a613ed03b0fa77f5671b8e216ea0aab842b063aa213fff8f" exitCode=0 Mar 18 10:10:36.325990 master-0 kubenswrapper[30420]: I0318 10:10:36.325935 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-zcm5j_f88c2a18-11f5-45ef-aff1-3c5976716d85/control-plane-machine-set-operator/0.log" Mar 18 10:10:36.326140 master-0 kubenswrapper[30420]: I0318 10:10:36.325993 30420 generic.go:334] "Generic (PLEG): container finished" podID="f88c2a18-11f5-45ef-aff1-3c5976716d85" 
containerID="d77d62684d3696a69a4baad8521b7beec7ec234f5d636741ff18bfd6906b5683" exitCode=1 Mar 18 10:10:36.328235 master-0 kubenswrapper[30420]: I0318 10:10:36.328191 30420 generic.go:334] "Generic (PLEG): container finished" podID="87a8662e-66f1-4aee-9344-564bb4ac4f9a" containerID="9741863ef9844fe110fec368fe8e35a337bceb7feefcd7589421d83a4b33ff81" exitCode=0 Mar 18 10:10:36.332746 master-0 kubenswrapper[30420]: I0318 10:10:36.332647 30420 generic.go:334] "Generic (PLEG): container finished" podID="0945a421-d7c4-46df-b3d9-507443627d51" containerID="1eff62cc27e434fd50cb63f04471e39fb7819f214071bd5d5eb17564061f1baa" exitCode=0 Mar 18 10:10:36.332746 master-0 kubenswrapper[30420]: I0318 10:10:36.332713 30420 generic.go:334] "Generic (PLEG): container finished" podID="0945a421-d7c4-46df-b3d9-507443627d51" containerID="8f448cb12e0cc4fb34d60ad284a20b2c9aca8ec622e43fb96e75a5f038812980" exitCode=0 Mar 18 10:10:36.348159 master-0 kubenswrapper[30420]: I0318 10:10:36.345919 30420 generic.go:334] "Generic (PLEG): container finished" podID="9d02e790-b9d0-4e2d-a97d-ec2eaf720f28" containerID="273c8765db6facd550b6e56f450546d9b1b71f8e90628bc1352e6d3fe67f7a08" exitCode=0 Mar 18 10:10:36.351686 master-0 kubenswrapper[30420]: I0318 10:10:36.350055 30420 generic.go:334] "Generic (PLEG): container finished" podID="91331360-dc70-45bb-a815-e00664bae6c4" containerID="0538eb942c1197a086b3273af768571780d6d5af303141476810f1cd7daec3cc" exitCode=0 Mar 18 10:10:36.351686 master-0 kubenswrapper[30420]: I0318 10:10:36.350088 30420 generic.go:334] "Generic (PLEG): container finished" podID="91331360-dc70-45bb-a815-e00664bae6c4" containerID="f03028f16df79cfb2d65134dc28295edb8b443255b855706b86769e87e1604c6" exitCode=0 Mar 18 10:10:36.351686 master-0 kubenswrapper[30420]: I0318 10:10:36.350102 30420 generic.go:334] "Generic (PLEG): container finished" podID="91331360-dc70-45bb-a815-e00664bae6c4" containerID="8de3d5cda49c071629c169597f57fc4a39ffa0565faf4afa9da96f88d8b22b28" exitCode=0 Mar 18 
10:10:36.351686 master-0 kubenswrapper[30420]: I0318 10:10:36.350109 30420 generic.go:334] "Generic (PLEG): container finished" podID="91331360-dc70-45bb-a815-e00664bae6c4" containerID="ba4b50efa1c5a3ef4b380af81a12c8288cb0cec49cd61d28198db983936b1f94" exitCode=0 Mar 18 10:10:36.351686 master-0 kubenswrapper[30420]: I0318 10:10:36.350117 30420 generic.go:334] "Generic (PLEG): container finished" podID="91331360-dc70-45bb-a815-e00664bae6c4" containerID="160626554dc940cedbe7ec0ddb596f31e480d63196f634936e05702f85c45819" exitCode=0 Mar 18 10:10:36.351686 master-0 kubenswrapper[30420]: I0318 10:10:36.350125 30420 generic.go:334] "Generic (PLEG): container finished" podID="91331360-dc70-45bb-a815-e00664bae6c4" containerID="8ef686cc40f68aff82f23ce87e06ff13fba380e3cd6b61b827160c9e73c4cbbc" exitCode=0 Mar 18 10:10:36.352498 master-0 kubenswrapper[30420]: I0318 10:10:36.352403 30420 generic.go:334] "Generic (PLEG): container finished" podID="432f611b-a1a2-4cc9-b005-17a16413d281" containerID="fd996d8153064578e39564038db6d922a85643610cafc41bae9a4fe71acf8389" exitCode=0 Mar 18 10:10:36.358878 master-0 kubenswrapper[30420]: I0318 10:10:36.357140 30420 generic.go:334] "Generic (PLEG): container finished" podID="2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0" containerID="c8c319ddb107c3bc56c6d9fe6eeed7e7744a57b20e36ccaa20a733dd325d4c8f" exitCode=0 Mar 18 10:10:36.386867 master-0 kubenswrapper[30420]: I0318 10:10:36.373188 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xgdvw_03de1ea6-da57-4e13-8e5a-d5e10a9f9957/kube-multus/0.log" Mar 18 10:10:36.386867 master-0 kubenswrapper[30420]: I0318 10:10:36.373252 30420 generic.go:334] "Generic (PLEG): container finished" podID="03de1ea6-da57-4e13-8e5a-d5e10a9f9957" containerID="2da220e2852846e9b471d19bf3329629d81b1d881746691dfdddb60fd750adba" exitCode=1 Mar 18 10:10:36.386867 master-0 kubenswrapper[30420]: I0318 10:10:36.379342 30420 generic.go:334] "Generic (PLEG): container finished" 
podID="a3657106-1eea-4031-8c92-85ba6287b425" containerID="06c0be19470a9053df1e868da4f3dfc9b3f3db58cf48affc02d1dbbb79a51995" exitCode=0 Mar 18 10:10:36.386867 master-0 kubenswrapper[30420]: I0318 10:10:36.383080 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-mw9tt_9f5c64aa-676e-4e48-b714-02f6edb1d361/cluster-autoscaler-operator/0.log" Mar 18 10:10:36.386867 master-0 kubenswrapper[30420]: I0318 10:10:36.384113 30420 generic.go:334] "Generic (PLEG): container finished" podID="9f5c64aa-676e-4e48-b714-02f6edb1d361" containerID="6655987065a30c5bbf651bf96600d36185c30b2a671ea89757e4e505e5002a5d" exitCode=255 Mar 18 10:10:36.386867 master-0 kubenswrapper[30420]: I0318 10:10:36.386411 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-r8fkv_d4d2218c-f9df-4d43-8727-ed3a920e23f7/package-server-manager/0.log" Mar 18 10:10:36.386867 master-0 kubenswrapper[30420]: I0318 10:10:36.386731 30420 generic.go:334] "Generic (PLEG): container finished" podID="d4d2218c-f9df-4d43-8727-ed3a920e23f7" containerID="2ad786c56f6dcaf1e2cffec16812c116ea52e84ada296839ebfedd3ef5e41741" exitCode=1 Mar 18 10:10:36.396560 master-0 kubenswrapper[30420]: I0318 10:10:36.390802 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-6-master-0_4ea5939e-5f4d-4028-9384-2ec5710ecdc8/installer/0.log" Mar 18 10:10:36.396560 master-0 kubenswrapper[30420]: I0318 10:10:36.390858 30420 generic.go:334] "Generic (PLEG): container finished" podID="4ea5939e-5f4d-4028-9384-2ec5710ecdc8" containerID="ee0f38924448efddd8bd62aa03fafbac2abe2ddc36be4b5eb348dac27bee7be4" exitCode=1 Mar 18 10:10:36.396560 master-0 kubenswrapper[30420]: I0318 10:10:36.392851 30420 generic.go:334] "Generic (PLEG): container finished" podID="8b906fc0-f2bf-4586-97e6-921bbd467b65" 
containerID="ca2bd4c098fa7a5b008bdac56aadab357bb0951ab5e2ff2f404990c8c28ed3a8" exitCode=0 Mar 18 10:10:36.396560 master-0 kubenswrapper[30420]: I0318 10:10:36.395167 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-rpxn4_8641c1d1-dd79-4f1f-9343-52d1ee6faf9f/config-sync-controllers/0.log" Mar 18 10:10:36.396560 master-0 kubenswrapper[30420]: I0318 10:10:36.395574 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-rpxn4_8641c1d1-dd79-4f1f-9343-52d1ee6faf9f/cluster-cloud-controller-manager/0.log" Mar 18 10:10:36.396560 master-0 kubenswrapper[30420]: I0318 10:10:36.395602 30420 generic.go:334] "Generic (PLEG): container finished" podID="8641c1d1-dd79-4f1f-9343-52d1ee6faf9f" containerID="c9db2465522a9f31bfdb29b4350bcd424f2fa2f288ceeee292a0e5256f8ed40d" exitCode=1 Mar 18 10:10:36.396560 master-0 kubenswrapper[30420]: I0318 10:10:36.395613 30420 generic.go:334] "Generic (PLEG): container finished" podID="8641c1d1-dd79-4f1f-9343-52d1ee6faf9f" containerID="592ca06fab8bb0c93dfd3465f07a7c645bf00008deb42f76b6d5198afd1f495a" exitCode=1 Mar 18 10:10:36.411091 master-0 kubenswrapper[30420]: I0318 10:10:36.411048 30420 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="f94e501b0ad12236c03bc538f983952a18a8058deb0777210379742bce193fde" exitCode=0 Mar 18 10:10:36.411292 master-0 kubenswrapper[30420]: I0318 10:10:36.411270 30420 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="5a898e220fc5eed6a4a32559913535749eb16cc2a7cd17e978e4c62aa7e6452a" exitCode=0 Mar 18 10:10:36.411413 master-0 kubenswrapper[30420]: I0318 10:10:36.411395 30420 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" 
containerID="0e1b90509e26fef960c00500d9ad97c317d8639e8d0264437904c7c3c438399a" exitCode=0 Mar 18 10:10:36.411604 master-0 kubenswrapper[30420]: E0318 10:10:36.411569 30420 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 10:10:36.422875 master-0 kubenswrapper[30420]: I0318 10:10:36.422805 30420 generic.go:334] "Generic (PLEG): container finished" podID="0c7b317c-d141-4e69-9c82-4a5dda6c3248" containerID="2f45eb55b88d94206ed5a68b6e7edfd43cd25729bac030b2a8ee190f8b3e4b8f" exitCode=0 Mar 18 10:10:36.430070 master-0 kubenswrapper[30420]: I0318 10:10:36.429414 30420 generic.go:334] "Generic (PLEG): container finished" podID="8ee99294-4785-49d0-b493-0d734cf09396" containerID="9f8d2fc41a698996d2e8d108e6acdc91bab1b3eba85194b567c7b7ad7a300279" exitCode=0 Mar 18 10:10:36.431530 master-0 kubenswrapper[30420]: I0318 10:10:36.431494 30420 generic.go:334] "Generic (PLEG): container finished" podID="8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d" containerID="ef56f38c2bc505e5fbc078e115510767e1b06d3c1193709a420591be902fdca8" exitCode=0 Mar 18 10:10:36.435672 master-0 kubenswrapper[30420]: I0318 10:10:36.435622 30420 generic.go:334] "Generic (PLEG): container finished" podID="29490aed-9c97-42d1-94c8-44d1de13b70c" containerID="7dacdb62f1945b9bcbdc5ee51170fb7ad65d9a415432a7a5c1a8a53dc9179ca2" exitCode=0 Mar 18 10:10:36.442487 master-0 kubenswrapper[30420]: I0318 10:10:36.442343 30420 generic.go:334] "Generic (PLEG): container finished" podID="a4d7edd6-7975-468e-adea-138d92ed1be1" containerID="3a3c8396e15ffcccb1d7182e3eb6dbd5c5cf86adc58a45d80d2016b54dbad828" exitCode=0 Mar 18 10:10:36.448087 master-0 kubenswrapper[30420]: I0318 10:10:36.448033 30420 generic.go:334] "Generic (PLEG): container finished" podID="d0605021-862d-424a-a4c1-037fb005b77e" containerID="eb346301fe01e98fabdb59a67db563268a1e2d2d2c9e4e2f98ed640abf5fcf03" exitCode=0 Mar 18 10:10:36.450390 master-0 kubenswrapper[30420]: I0318 10:10:36.450361 30420 generic.go:334] 
"Generic (PLEG): container finished" podID="6a6a616d-012a-479e-ab3d-b21295ea1805" containerID="1438e5c0b41d2a2cdef9ebed19bce07d60cb299edfd66da1254cb9b0f6f74353" exitCode=0 Mar 18 10:10:36.452837 master-0 kubenswrapper[30420]: I0318 10:10:36.452792 30420 generic.go:334] "Generic (PLEG): container finished" podID="9fc664ff-2e8f-441d-82dc-8f21c1d362d7" containerID="6959115a6f11e9fd2881ca4214b94da71213aad3f3ef00ebec36ed62d0816399" exitCode=0 Mar 18 10:10:36.454450 master-0 kubenswrapper[30420]: I0318 10:10:36.454406 30420 generic.go:334] "Generic (PLEG): container finished" podID="ec53d7fa-445b-4e1d-84ef-545f08e80ccc" containerID="ab9a533206bf10cbc0086475add5139b53093ab44226d73893369fd1ba1ed0a0" exitCode=0 Mar 18 10:10:36.456668 master-0 kubenswrapper[30420]: I0318 10:10:36.456631 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-77n8q_b6948f93-b573-4f09-b754-aaa2269e2875/manager/0.log" Mar 18 10:10:36.456799 master-0 kubenswrapper[30420]: I0318 10:10:36.456781 30420 generic.go:334] "Generic (PLEG): container finished" podID="b6948f93-b573-4f09-b754-aaa2269e2875" containerID="7a73a7304ad52748de231e8de0dd60f0f62a95ba031328669ed0ac946a01de35" exitCode=1 Mar 18 10:10:36.459385 master-0 kubenswrapper[30420]: I0318 10:10:36.459358 30420 generic.go:334] "Generic (PLEG): container finished" podID="f076eaf0-b041-4db0-ba06-3d85e23bb654" containerID="b5df01736cfc47aa85b36fd7020d93ab1a10c4989f7408f5d6725b96384201c0" exitCode=0 Mar 18 10:10:36.466352 master-0 kubenswrapper[30420]: E0318 10:10:36.466309 30420 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 18 10:10:36.476993 master-0 kubenswrapper[30420]: I0318 10:10:36.476945 30420 generic.go:334] "Generic (PLEG): container finished" podID="2d014721-ed53-447a-b737-c496bbba18be" containerID="09180a6a9fee68a97b5503198f4ae1ab6d84235d2b7270501ebf779151b55941" exitCode=0 Mar 
18 10:10:36.502850 master-0 kubenswrapper[30420]: I0318 10:10:36.496411 30420 generic.go:334] "Generic (PLEG): container finished" podID="346d6f79-a9bd-4097-abe7-b68830aa2e84" containerID="974b6ae008035f16bd3f106b986b5975e658b69a9a1e106bd2d280e49e6fba6d" exitCode=0 Mar 18 10:10:36.513888 master-0 kubenswrapper[30420]: E0318 10:10:36.513311 30420 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 10:10:36.521241 master-0 kubenswrapper[30420]: I0318 10:10:36.520899 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_90db95c5-2017-4b04-b11c-9844947c5be9/installer/0.log" Mar 18 10:10:36.521241 master-0 kubenswrapper[30420]: I0318 10:10:36.520974 30420 generic.go:334] "Generic (PLEG): container finished" podID="90db95c5-2017-4b04-b11c-9844947c5be9" containerID="84fe69ce9654e0f778c53fad94cc55da3a405c4d3f78319e40a6e7f4b1d02966" exitCode=1 Mar 18 10:10:36.542630 master-0 kubenswrapper[30420]: I0318 10:10:36.537150 30420 generic.go:334] "Generic (PLEG): container finished" podID="54a208d1-afe8-49b5-92e0-e27afb4abc80" containerID="d65f913e3d46ba5408795bb9c468d0294b6c4c00a07a18a41204ec7233a6d96b" exitCode=0 Mar 18 10:10:36.562906 master-0 kubenswrapper[30420]: I0318 10:10:36.560759 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-xnvn9_29fbc78b-1887-40d4-8165-f0f7cc40b583/machine-api-operator/0.log" Mar 18 10:10:36.562906 master-0 kubenswrapper[30420]: I0318 10:10:36.561840 30420 generic.go:334] "Generic (PLEG): container finished" podID="29fbc78b-1887-40d4-8165-f0f7cc40b583" containerID="8bc81d8dfdc71ea2b5b45a9af5008e6292938bf340e41102f31bdd98b3d93eaa" exitCode=255 Mar 18 10:10:36.594227 master-0 kubenswrapper[30420]: I0318 10:10:36.594126 30420 generic.go:334] "Generic (PLEG): container finished" podID="11a2f93448b9d54da9854663936e2b73" 
containerID="dbf2586f3189d0b8f9dc638d92901a45e6cf3cdbf23cf4bd198e6fe898ec14b2" exitCode=0 Mar 18 10:10:36.621953 master-0 kubenswrapper[30420]: I0318 10:10:36.615558 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log" Mar 18 10:10:36.621953 master-0 kubenswrapper[30420]: E0318 10:10:36.615644 30420 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 10:10:36.621953 master-0 kubenswrapper[30420]: I0318 10:10:36.615915 30420 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="d2de72527ed6923a5ffc8e75d557ea5b2e3fbc7f0f250aeb34b97b6d6a4b673b" exitCode=1 Mar 18 10:10:36.621953 master-0 kubenswrapper[30420]: I0318 10:10:36.615933 30420 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="a88536111853576d542216418fa9e6a7c0a796244d77dbfb3568461d1ad235ad" exitCode=0 Mar 18 10:10:36.621953 master-0 kubenswrapper[30420]: I0318 10:10:36.619019 30420 generic.go:334] "Generic (PLEG): container finished" podID="fcf01f63-ed66-4f0d-b2df-97c77bbf8543" containerID="cd5460a46f1af5014f09f3d74c852c3c8e1dbae9dbdc5909c502350cb309005a" exitCode=0 Mar 18 10:10:36.621953 master-0 kubenswrapper[30420]: I0318 10:10:36.621697 30420 generic.go:334] "Generic (PLEG): container finished" podID="2cda3479-c3ed-4d79-bbd3-888e64b328f7" containerID="d957302f7adb981277fbf539c8fb8ba8b510cdf036ae3b42bb11275306e467ec" exitCode=0 Mar 18 10:10:36.626528 master-0 kubenswrapper[30420]: I0318 10:10:36.625832 30420 generic.go:334] "Generic (PLEG): container finished" podID="5ea90fee-5b5e-4b59-bfc4-969ee8c7912e" containerID="ba2a4b371f548813e64e9936bac5f8a30427b5b6c9ba22e587be7235d007fdc6" exitCode=0 Mar 18 10:10:36.733147 master-0 kubenswrapper[30420]: E0318 10:10:36.733091 30420 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 10:10:36.835967 master-0 kubenswrapper[30420]: E0318 10:10:36.834707 30420 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 10:10:36.842177 master-0 kubenswrapper[30420]: I0318 10:10:36.842150 30420 manager.go:324] Recovery completed Mar 18 10:10:36.866476 master-0 kubenswrapper[30420]: E0318 10:10:36.866392 30420 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 18 10:10:36.930943 master-0 kubenswrapper[30420]: I0318 10:10:36.930809 30420 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 10:10:36.933393 master-0 kubenswrapper[30420]: I0318 10:10:36.933347 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 10:10:36.933393 master-0 kubenswrapper[30420]: I0318 10:10:36.933395 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 10:10:36.933526 master-0 kubenswrapper[30420]: I0318 10:10:36.933404 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 10:10:36.934979 master-0 kubenswrapper[30420]: E0318 10:10:36.934936 30420 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 10:10:36.963631 master-0 kubenswrapper[30420]: I0318 10:10:36.963590 30420 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 18 10:10:36.963631 master-0 kubenswrapper[30420]: I0318 10:10:36.963619 30420 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 18 10:10:36.963896 master-0 kubenswrapper[30420]: I0318 10:10:36.963653 30420 state_mem.go:36] "Initialized new in-memory state store" Mar 18 10:10:36.963896 master-0 kubenswrapper[30420]: I0318 
10:10:36.963875 30420 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 18 10:10:36.963956 master-0 kubenswrapper[30420]: I0318 10:10:36.963887 30420 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 18 10:10:36.963956 master-0 kubenswrapper[30420]: I0318 10:10:36.963906 30420 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Mar 18 10:10:36.963956 master-0 kubenswrapper[30420]: I0318 10:10:36.963912 30420 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Mar 18 10:10:36.963956 master-0 kubenswrapper[30420]: I0318 10:10:36.963918 30420 policy_none.go:49] "None policy: Start" Mar 18 10:10:36.968863 master-0 kubenswrapper[30420]: I0318 10:10:36.968809 30420 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 18 10:10:36.968945 master-0 kubenswrapper[30420]: I0318 10:10:36.968882 30420 state_mem.go:35] "Initializing new in-memory state store" Mar 18 10:10:36.969189 master-0 kubenswrapper[30420]: I0318 10:10:36.969172 30420 state_mem.go:75] "Updated machine memory state" Mar 18 10:10:36.969189 master-0 kubenswrapper[30420]: I0318 10:10:36.969187 30420 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Mar 18 10:10:36.995690 master-0 kubenswrapper[30420]: I0318 10:10:36.995651 30420 manager.go:334] "Starting Device Plugin manager" Mar 18 10:10:36.995953 master-0 kubenswrapper[30420]: I0318 10:10:36.995718 30420 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 18 10:10:36.995953 master-0 kubenswrapper[30420]: I0318 10:10:36.995733 30420 server.go:79] "Starting device plugin registration server" Mar 18 10:10:36.996190 master-0 kubenswrapper[30420]: I0318 10:10:36.996096 30420 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 18 10:10:36.996190 master-0 kubenswrapper[30420]: I0318 10:10:36.996111 30420 container_log_manager.go:189] "Initializing container log 
rotate workers" workers=1 monitorPeriod="10s" Mar 18 10:10:36.996399 master-0 kubenswrapper[30420]: I0318 10:10:36.996360 30420 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 18 10:10:36.996480 master-0 kubenswrapper[30420]: I0318 10:10:36.996463 30420 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 18 10:10:36.996480 master-0 kubenswrapper[30420]: I0318 10:10:36.996475 30420 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 18 10:10:37.004691 master-0 kubenswrapper[30420]: E0318 10:10:37.004670 30420 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 18 10:10:37.097100 master-0 kubenswrapper[30420]: I0318 10:10:37.097059 30420 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 10:10:37.099196 master-0 kubenswrapper[30420]: I0318 10:10:37.099171 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 10:10:37.099296 master-0 kubenswrapper[30420]: I0318 10:10:37.099208 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 10:10:37.099296 master-0 kubenswrapper[30420]: I0318 10:10:37.099220 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 10:10:37.099296 master-0 kubenswrapper[30420]: I0318 10:10:37.099242 30420 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 10:10:37.667510 master-0 kubenswrapper[30420]: I0318 10:10:37.667401 30420 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0","openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 10:10:37.667712 master-0 kubenswrapper[30420]: I0318 10:10:37.667552 30420 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 10:10:37.670553 master-0 kubenswrapper[30420]: I0318 10:10:37.670508 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 10:10:37.670621 master-0 kubenswrapper[30420]: I0318 10:10:37.670566 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 10:10:37.670621 master-0 kubenswrapper[30420]: I0318 10:10:37.670583 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 10:10:37.670762 master-0 kubenswrapper[30420]: I0318 10:10:37.670733 30420 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 10:10:37.670983 master-0 kubenswrapper[30420]: I0318 10:10:37.670951 30420 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 10:10:37.674731 master-0 kubenswrapper[30420]: I0318 10:10:37.674685 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 10:10:37.674731 master-0 kubenswrapper[30420]: I0318 10:10:37.674727 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 10:10:37.674879 master-0 kubenswrapper[30420]: I0318 10:10:37.674741 30420 kubelet_node_status.go:724] "Recording event message for node" 
node="master-0" event="NodeHasSufficientPID" Mar 18 10:10:37.674931 master-0 kubenswrapper[30420]: I0318 10:10:37.674887 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 10:10:37.674960 master-0 kubenswrapper[30420]: I0318 10:10:37.674938 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 10:10:37.674960 master-0 kubenswrapper[30420]: I0318 10:10:37.674956 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 10:10:37.675031 master-0 kubenswrapper[30420]: I0318 10:10:37.675009 30420 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 10:10:37.675423 master-0 kubenswrapper[30420]: I0318 10:10:37.674914 30420 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 10:10:37.678418 master-0 kubenswrapper[30420]: I0318 10:10:37.678382 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 10:10:37.678418 master-0 kubenswrapper[30420]: I0318 10:10:37.678413 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 10:10:37.678527 master-0 kubenswrapper[30420]: I0318 10:10:37.678425 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 10:10:37.679226 master-0 kubenswrapper[30420]: I0318 10:10:37.679038 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 10:10:37.679226 master-0 kubenswrapper[30420]: I0318 10:10:37.679066 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 10:10:37.679226 master-0 kubenswrapper[30420]: I0318 
10:10:37.679078 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 10:10:37.679336 master-0 kubenswrapper[30420]: I0318 10:10:37.679274 30420 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 10:10:37.679503 master-0 kubenswrapper[30420]: I0318 10:10:37.679468 30420 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 10:10:37.683393 master-0 kubenswrapper[30420]: I0318 10:10:37.683357 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 10:10:37.683502 master-0 kubenswrapper[30420]: I0318 10:10:37.683399 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 10:10:37.683502 master-0 kubenswrapper[30420]: I0318 10:10:37.683414 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 10:10:37.686940 master-0 kubenswrapper[30420]: I0318 10:10:37.686808 30420 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 10:10:37.687320 master-0 kubenswrapper[30420]: I0318 10:10:37.687273 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 10:10:37.687373 master-0 kubenswrapper[30420]: I0318 10:10:37.687326 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 10:10:37.687373 master-0 kubenswrapper[30420]: I0318 10:10:37.687339 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 10:10:37.687436 master-0 kubenswrapper[30420]: I0318 10:10:37.687386 30420 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 
10:10:37.690794 master-0 kubenswrapper[30420]: I0318 10:10:37.690756 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 10:10:37.690918 master-0 kubenswrapper[30420]: I0318 10:10:37.690799 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 10:10:37.690918 master-0 kubenswrapper[30420]: I0318 10:10:37.690812 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 10:10:37.694451 master-0 kubenswrapper[30420]: I0318 10:10:37.694421 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 10:10:37.694588 master-0 kubenswrapper[30420]: I0318 10:10:37.694574 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 10:10:37.694697 master-0 kubenswrapper[30420]: I0318 10:10:37.694680 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 10:10:37.694959 master-0 kubenswrapper[30420]: I0318 10:10:37.694938 30420 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 10:10:37.695288 master-0 kubenswrapper[30420]: I0318 10:10:37.695251 30420 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 10:10:37.698996 master-0 kubenswrapper[30420]: I0318 10:10:37.698938 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 10:10:37.698996 master-0 kubenswrapper[30420]: I0318 10:10:37.698989 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 10:10:37.698996 master-0 kubenswrapper[30420]: I0318 10:10:37.699000 30420 kubelet_node_status.go:724] 
"Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 10:10:37.699264 master-0 kubenswrapper[30420]: I0318 10:10:37.699150 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"668a49f34398024ea5898641cb8be9421d28998b9871029b38ab0fb8fddfb640"} Mar 18 10:10:37.699264 master-0 kubenswrapper[30420]: I0318 10:10:37.699207 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"16fdc2ec4e971dcc798c97b8e7872ebfb3c5f1297f42468d5129028ea0f9e0fb"} Mar 18 10:10:37.699264 master-0 kubenswrapper[30420]: I0318 10:10:37.699216 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 10:10:37.699264 master-0 kubenswrapper[30420]: I0318 10:10:37.699237 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 10:10:37.699264 master-0 kubenswrapper[30420]: I0318 10:10:37.699247 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 10:10:37.699495 master-0 kubenswrapper[30420]: I0318 10:10:37.699333 30420 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 10:10:37.699623 master-0 kubenswrapper[30420]: I0318 10:10:37.699218 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"ded34cf4fdd9b643ef1d1a0098bfad469ce8ef5239dc4118284afcfde2e248f3"} Mar 18 10:10:37.699755 master-0 kubenswrapper[30420]: I0318 10:10:37.699722 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerDied","Data":"51b7edc7a7043b6e7a110520107ff6d77f1544d8cbe4bac90f24bd4ae0e3e289"} Mar 18 10:10:37.700008 master-0 kubenswrapper[30420]: I0318 10:10:37.699849 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"187a5eb02f6d39f4d5d17d569f5578af7e87c01c9503e828b0f618e0f62581eb"} Mar 18 10:10:37.701736 master-0 kubenswrapper[30420]: I0318 10:10:37.701163 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"3d919231c945d2ac76a2314ac90b86daaf0c5723053a078a52a777095897804e"} Mar 18 10:10:37.702030 master-0 kubenswrapper[30420]: I0318 10:10:37.702013 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"2cedaa526f8077c080292a77549e88acf42196916ed5bec8faa88ce6a3333a29"} Mar 18 10:10:37.702115 master-0 kubenswrapper[30420]: I0318 10:10:37.702099 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"8ab6de4ab6f7e15d15c92c129b4e4f727b4794a9b9d9c8fd458199859bb80c35"} Mar 18 10:10:37.702208 master-0 kubenswrapper[30420]: I0318 10:10:37.702191 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"8edce4e71cecfae4457a35520658e712853fe5f7943d0341fb4cb9cb34b170ac"} Mar 18 10:10:37.702290 master-0 kubenswrapper[30420]: I0318 10:10:37.702268 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"6dda24880c260c4a49380224f82bd0302255a57a9081e30246f7376aa462edaf"} Mar 18 10:10:37.702379 master-0 kubenswrapper[30420]: I0318 10:10:37.702363 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"4399c846d156fc9ec273e7482a7df69bd6d7ebd35bceea9ea824c44fc0dbb98b"} Mar 18 10:10:37.702465 master-0 kubenswrapper[30420]: I0318 10:10:37.702449 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"c51a160bfa16a28b74f81d311f303e209d7ed9b37be27ca1db9e534e7071f1af"} Mar 18 10:10:37.702560 master-0 kubenswrapper[30420]: I0318 10:10:37.702541 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"ec0a4a4a27c5788cf435e3f981e3abe7cd525b4f9b545a25440129af48eb261e"} Mar 18 10:10:37.702662 master-0 kubenswrapper[30420]: I0318 10:10:37.702643 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"9dc4baf2ee903f66ceacf214f401bab7bc4c01b6dec665d83f3584b31ae00f41"} Mar 18 10:10:37.702817 master-0 kubenswrapper[30420]: I0318 10:10:37.702800 30420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4eb3bb67999d4fed39987c312beb2bc06f47fac3b7fcdfdc48994c77752b8ad" Mar 18 10:10:37.702915 master-0 kubenswrapper[30420]: I0318 10:10:37.702904 30420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b455e2d76fdd49301fe2af949c3adea4b9e18edfc2b50e8b9cd691e2613e68a" Mar 18 10:10:37.703000 master-0 kubenswrapper[30420]: I0318 10:10:37.702990 30420 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="ab6781799773a4bd269941acef201c1236103b10079655748dd8db69e5953242" Mar 18 10:10:37.703097 master-0 kubenswrapper[30420]: I0318 10:10:37.703053 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 10:10:37.703141 master-0 kubenswrapper[30420]: I0318 10:10:37.703118 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 10:10:37.703141 master-0 kubenswrapper[30420]: I0318 10:10:37.703135 30420 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 10:10:37.703225 master-0 kubenswrapper[30420]: I0318 10:10:37.703079 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3ddfa5bb627414042dcc2d2204092c5a","Type":"ContainerStarted","Data":"86d62eb7c2cf9bbf042a562045a7b7ee2ca62e629ba61cfe06638c2f84729c97"} Mar 18 10:10:37.703288 master-0 kubenswrapper[30420]: I0318 10:10:37.703274 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3ddfa5bb627414042dcc2d2204092c5a","Type":"ContainerStarted","Data":"a6b636ab88d31a6f628201f90c1cf0b74da0bfac5e5f726055bbd0d041527530"} Mar 18 10:10:37.703346 master-0 kubenswrapper[30420]: I0318 10:10:37.703335 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3ddfa5bb627414042dcc2d2204092c5a","Type":"ContainerStarted","Data":"1bf60bf33cfc23f5f7f85a209ad5473b9719031ea092727003128b81f69dfc9e"} Mar 18 10:10:37.703442 master-0 kubenswrapper[30420]: I0318 10:10:37.703424 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"3ddfa5bb627414042dcc2d2204092c5a","Type":"ContainerStarted","Data":"fdfbe791c7dc81669c0055767b2119c9a2cf184b178248ae50fb983ef7ccd9a8"} Mar 18 10:10:37.703535 master-0 kubenswrapper[30420]: I0318 10:10:37.703515 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3ddfa5bb627414042dcc2d2204092c5a","Type":"ContainerStarted","Data":"fd600b9af2d2390bce62bac606740fc4a23373db916a45bc5361be1ed164fee1"} Mar 18 10:10:37.703729 master-0 kubenswrapper[30420]: I0318 10:10:37.703710 30420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf4889e117bb83c7e1a1800e9a36e897d1db0934994a8b13923df3be14b35ebb" Mar 18 10:10:37.703817 master-0 kubenswrapper[30420]: I0318 10:10:37.703804 30420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="175a7f574cdd0bb033854cd54eafd3c786bd342ffc7ec8cd013b6215f3ca1994" Mar 18 10:10:37.703927 master-0 kubenswrapper[30420]: I0318 10:10:37.703911 30420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="027cb739429dc761a3f2ade604437810a5898c43151b24416d6963442db7ad65" Mar 18 10:10:37.703998 master-0 kubenswrapper[30420]: I0318 10:10:37.703988 30420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e692e8ac748487a3686bf48bba0af89ab5710b4a4e9840c96ef2c14535ec26e" Mar 18 10:10:37.704065 master-0 kubenswrapper[30420]: I0318 10:10:37.704056 30420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="248be0eef87c6987bd3e5849d27bf7120297d80837bfe7be2b2148ea06921d34" Mar 18 10:10:37.704213 master-0 kubenswrapper[30420]: I0318 10:10:37.704201 30420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="823fdbbda6c3f662c8a7386983ae9bef843b30223cfc80549bf1fe24201c6148" Mar 18 10:10:37.704299 master-0 kubenswrapper[30420]: I0318 10:10:37.704288 30420 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="401219e24c2bd7d9e48328027e1c78136e8f25304b76126b40b8362b04997723" Mar 18 10:10:37.704379 master-0 kubenswrapper[30420]: I0318 10:10:37.704369 30420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="306e8c3b294ebc0b6118bec332d25f893bead6bde2beb01fbece7b1ede0478ae" Mar 18 10:10:37.704478 master-0 kubenswrapper[30420]: I0318 10:10:37.704469 30420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44fff1e61adbaef01d35b3cb7a668fee655369026524529c8495c49a8dde5128" Mar 18 10:10:37.704537 master-0 kubenswrapper[30420]: I0318 10:10:37.704528 30420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b33ec4b21a843e83059f3a27a8bc8244c587a53368b1233d2c8ea0115ce547d" Mar 18 10:10:37.704595 master-0 kubenswrapper[30420]: I0318 10:10:37.704586 30420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="412e9b55f8faac02229faa1064ae91e5d24b587483498fa55a3224e6f756199c" Mar 18 10:10:37.704659 master-0 kubenswrapper[30420]: I0318 10:10:37.704646 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"16fb4ea7f83036d9c6adf3454fc7e9db","Type":"ContainerStarted","Data":"66dba26b707d8a7ef9a56c2e052eb81cdb6a21e228ccc4ca178ec7f65804ffae"} Mar 18 10:10:37.704728 master-0 kubenswrapper[30420]: I0318 10:10:37.704713 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"16fb4ea7f83036d9c6adf3454fc7e9db","Type":"ContainerStarted","Data":"03355a5e2caa4496c4b10efd4243dd60c302d54b340a80972ebe3e5661f0dd6b"} Mar 18 10:10:37.704791 master-0 kubenswrapper[30420]: I0318 10:10:37.704779 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" 
event={"ID":"11a2f93448b9d54da9854663936e2b73","Type":"ContainerStarted","Data":"9224601ea02fef06bc1cdd3b3456114c80416edba62803e0538093078c92a30f"} Mar 18 10:10:37.704883 master-0 kubenswrapper[30420]: I0318 10:10:37.704870 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"11a2f93448b9d54da9854663936e2b73","Type":"ContainerStarted","Data":"346bcdf104d2ea10327572091843ffc672c87624551d190458c48063f43a2f22"} Mar 18 10:10:37.704942 master-0 kubenswrapper[30420]: I0318 10:10:37.704930 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"11a2f93448b9d54da9854663936e2b73","Type":"ContainerStarted","Data":"86ad2fe80dc58ccabdc7ba9d7e52d68245236d6e0eab6c192777c1cb03777ee6"} Mar 18 10:10:37.705020 master-0 kubenswrapper[30420]: I0318 10:10:37.705007 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"11a2f93448b9d54da9854663936e2b73","Type":"ContainerDied","Data":"dbf2586f3189d0b8f9dc638d92901a45e6cf3cdbf23cf4bd198e6fe898ec14b2"} Mar 18 10:10:37.705106 master-0 kubenswrapper[30420]: I0318 10:10:37.705089 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"11a2f93448b9d54da9854663936e2b73","Type":"ContainerStarted","Data":"06e9465576405f83c2377274bbe7c9f80c7e1d2afadf9ee173551a2f7f95d786"} Mar 18 10:10:37.705204 master-0 kubenswrapper[30420]: I0318 10:10:37.705185 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"b4356aff744ddd84b751a19b6b1c926a7d4c3a2ecf0278ac7c42e1a78ef7db64"} Mar 18 10:10:37.705302 master-0 kubenswrapper[30420]: I0318 10:10:37.705283 30420 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"d2de72527ed6923a5ffc8e75d557ea5b2e3fbc7f0f250aeb34b97b6d6a4b673b"} Mar 18 10:10:37.705383 master-0 kubenswrapper[30420]: I0318 10:10:37.705369 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"a88536111853576d542216418fa9e6a7c0a796244d77dbfb3568461d1ad235ad"} Mar 18 10:10:37.705466 master-0 kubenswrapper[30420]: I0318 10:10:37.705453 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"5f76a44720ca3021debcea94492e12e9442fdb9f8fbe338ee1965217a14109bd"} Mar 18 10:10:37.705538 master-0 kubenswrapper[30420]: I0318 10:10:37.705527 30420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="918ed1f73d1c1442c0a8e7726a8b614353a7b30844e6305ebc1a1ba857285248" Mar 18 10:10:37.705596 master-0 kubenswrapper[30420]: I0318 10:10:37.705586 30420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c985fd1643f6c6fd8181176e1149d324515647d1a390abe33081b9ded6959a0f" Mar 18 10:10:41.525338 master-0 kubenswrapper[30420]: I0318 10:10:41.525270 30420 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 18 10:10:41.526372 master-0 kubenswrapper[30420]: I0318 10:10:41.525658 30420 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 18 10:10:41.527878 master-0 kubenswrapper[30420]: I0318 10:10:41.527719 30420 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 18 10:10:41.533948 master-0 kubenswrapper[30420]: I0318 
10:10:41.533877 30420 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 18 10:10:41.539086 master-0 kubenswrapper[30420]: I0318 10:10:41.535721 30420 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 18 10:10:41.640065 master-0 kubenswrapper[30420]: I0318 10:10:41.635996 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 10:10:41.640065 master-0 kubenswrapper[30420]: I0318 10:10:41.636064 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:10:41.640065 master-0 kubenswrapper[30420]: I0318 10:10:41.636093 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3ddfa5bb627414042dcc2d2204092c5a-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"3ddfa5bb627414042dcc2d2204092c5a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:10:41.640065 master-0 kubenswrapper[30420]: I0318 10:10:41.636125 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 10:10:41.640065 
master-0 kubenswrapper[30420]: I0318 10:10:41.636147 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 10:10:41.640065 master-0 kubenswrapper[30420]: I0318 10:10:41.636166 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:10:41.640065 master-0 kubenswrapper[30420]: I0318 10:10:41.636190 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:10:41.640065 master-0 kubenswrapper[30420]: I0318 10:10:41.636209 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:10:41.640065 master-0 kubenswrapper[30420]: I0318 10:10:41.636233 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3ddfa5bb627414042dcc2d2204092c5a-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: 
\"3ddfa5bb627414042dcc2d2204092c5a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:10:41.640065 master-0 kubenswrapper[30420]: I0318 10:10:41.636251 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 10:10:41.640065 master-0 kubenswrapper[30420]: I0318 10:10:41.636268 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 10:10:41.640065 master-0 kubenswrapper[30420]: I0318 10:10:41.636286 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 10:10:41.640065 master-0 kubenswrapper[30420]: I0318 10:10:41.636317 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:10:41.640065 master-0 kubenswrapper[30420]: I0318 10:10:41.636335 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/11a2f93448b9d54da9854663936e2b73-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"11a2f93448b9d54da9854663936e2b73\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 10:10:41.640065 master-0 kubenswrapper[30420]: I0318 10:10:41.636353 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 10:10:41.640065 master-0 kubenswrapper[30420]: I0318 10:10:41.636371 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 10:10:41.640065 master-0 kubenswrapper[30420]: I0318 10:10:41.636396 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:10:41.640065 master-0 kubenswrapper[30420]: I0318 10:10:41.636423 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:10:41.640065 master-0 kubenswrapper[30420]: I0318 10:10:41.636449 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:10:41.640065 master-0 kubenswrapper[30420]: I0318 10:10:41.636470 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/11a2f93448b9d54da9854663936e2b73-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"11a2f93448b9d54da9854663936e2b73\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 10:10:41.737110 master-0 kubenswrapper[30420]: I0318 10:10:41.737032 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 10:10:41.737110 master-0 kubenswrapper[30420]: I0318 10:10:41.737098 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:10:41.737110 master-0 kubenswrapper[30420]: I0318 10:10:41.737120 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3ddfa5bb627414042dcc2d2204092c5a-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"3ddfa5bb627414042dcc2d2204092c5a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:10:41.737362 master-0 kubenswrapper[30420]: I0318 10:10:41.737245 30420 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:10:41.737362 master-0 kubenswrapper[30420]: I0318 10:10:41.737342 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 10:10:41.737422 master-0 kubenswrapper[30420]: I0318 10:10:41.737373 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 10:10:41.737422 master-0 kubenswrapper[30420]: I0318 10:10:41.737402 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:10:41.737484 master-0 kubenswrapper[30420]: I0318 10:10:41.737424 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:10:41.737484 master-0 kubenswrapper[30420]: I0318 
10:10:41.737445 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:10:41.737484 master-0 kubenswrapper[30420]: I0318 10:10:41.737465 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3ddfa5bb627414042dcc2d2204092c5a-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"3ddfa5bb627414042dcc2d2204092c5a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:10:41.737566 master-0 kubenswrapper[30420]: I0318 10:10:41.737486 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 10:10:41.737566 master-0 kubenswrapper[30420]: I0318 10:10:41.737508 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 10:10:41.737566 master-0 kubenswrapper[30420]: I0318 10:10:41.737532 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 10:10:41.737566 master-0 kubenswrapper[30420]: I0318 10:10:41.737553 30420 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:10:41.737678 master-0 kubenswrapper[30420]: I0318 10:10:41.737575 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/11a2f93448b9d54da9854663936e2b73-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"11a2f93448b9d54da9854663936e2b73\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 10:10:41.737678 master-0 kubenswrapper[30420]: I0318 10:10:41.737597 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 10:10:41.737678 master-0 kubenswrapper[30420]: I0318 10:10:41.737616 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 10:10:41.737678 master-0 kubenswrapper[30420]: I0318 10:10:41.737640 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:10:41.737678 master-0 kubenswrapper[30420]: I0318 10:10:41.737659 30420 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:10:41.737839 master-0 kubenswrapper[30420]: I0318 10:10:41.737680 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:10:41.737839 master-0 kubenswrapper[30420]: I0318 10:10:41.737703 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/11a2f93448b9d54da9854663936e2b73-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"11a2f93448b9d54da9854663936e2b73\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 10:10:41.737839 master-0 kubenswrapper[30420]: I0318 10:10:41.737737 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/11a2f93448b9d54da9854663936e2b73-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"11a2f93448b9d54da9854663936e2b73\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 10:10:41.737839 master-0 kubenswrapper[30420]: I0318 10:10:41.737772 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3ddfa5bb627414042dcc2d2204092c5a-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"3ddfa5bb627414042dcc2d2204092c5a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:10:41.737839 master-0 kubenswrapper[30420]: I0318 
10:10:41.737801 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 10:10:41.737996 master-0 kubenswrapper[30420]: I0318 10:10:41.737848 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 10:10:41.737996 master-0 kubenswrapper[30420]: I0318 10:10:41.737880 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:10:41.737996 master-0 kubenswrapper[30420]: I0318 10:10:41.737910 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:10:41.737996 master-0 kubenswrapper[30420]: I0318 10:10:41.737940 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:10:41.737996 master-0 kubenswrapper[30420]: I0318 
10:10:41.737966 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3ddfa5bb627414042dcc2d2204092c5a-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"3ddfa5bb627414042dcc2d2204092c5a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:10:41.737996 master-0 kubenswrapper[30420]: I0318 10:10:41.737994 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 10:10:41.738166 master-0 kubenswrapper[30420]: I0318 10:10:41.738023 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 10:10:41.738166 master-0 kubenswrapper[30420]: I0318 10:10:41.738052 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 10:10:41.738166 master-0 kubenswrapper[30420]: I0318 10:10:41.738080 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:10:41.738166 master-0 kubenswrapper[30420]: I0318 10:10:41.738107 30420 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/11a2f93448b9d54da9854663936e2b73-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"11a2f93448b9d54da9854663936e2b73\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 10:10:41.738166 master-0 kubenswrapper[30420]: I0318 10:10:41.738138 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 10:10:41.738166 master-0 kubenswrapper[30420]: I0318 10:10:41.738164 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 10:10:41.738363 master-0 kubenswrapper[30420]: I0318 10:10:41.738193 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:10:41.738363 master-0 kubenswrapper[30420]: I0318 10:10:41.738224 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:10:41.738363 master-0 kubenswrapper[30420]: I0318 10:10:41.738279 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:10:41.738363 master-0 kubenswrapper[30420]: I0318 10:10:41.738338 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 10:10:41.892848 master-0 kubenswrapper[30420]: I0318 10:10:41.892776 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 18 10:10:41.901138 master-0 kubenswrapper[30420]: I0318 10:10:41.901086 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:10:41.901267 master-0 kubenswrapper[30420]: I0318 10:10:41.901204 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:10:41.906094 master-0 kubenswrapper[30420]: I0318 10:10:41.906046 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:10:41.906500 master-0 kubenswrapper[30420]: I0318 10:10:41.906451 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 18 10:10:42.092861 master-0 kubenswrapper[30420]: I0318 10:10:42.092591 30420 apiserver.go:52] "Watching apiserver" Mar 18 10:10:42.131848 master-0 kubenswrapper[30420]: I0318 10:10:42.121309 30420 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 18 10:10:42.139746 master-0 kubenswrapper[30420]: I0318 10:10:42.139112 30420 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4wcqx","openshift-multus/multus-additional-cni-plugins-dg6dw","openshift-multus/network-metrics-daemon-tbxt4","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6","openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz","openshift-machine-config-operator/machine-config-server-9wnkm","openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l","openshift-marketplace/certified-operators-pdfn6","openshift-marketplace/community-operators-nzqck","openshift-monitoring/node-exporter-l9q9t","openshift-network-diagnostics/network-check-target-42l55","assisted-installer/assisted-installer-controller-ttq68","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-mqbmq","openshift-dns/dns-default-z9sf5","openshift-network-operator/network-operator-7bd846bfc4-8srnz","openshift-etcd/installer-2-master-0","openshift-kube-scheduler/installer-6-master-0","openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg","openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw","openshift-dns/node-resolver-hjpz8","openshift-etcd/installer-1-master-0","openshift-marketplace/marketplace-operator-89ccd998f-2glpv","openshift-monitoring/metrics-server-74c475bc87-xx98m","openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q","openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k","openshift-apiserver/apiserver-687747fbb4-k7dnf","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt","openshift-multus/multus-admission-controller-58c9f8fc64-ssnvh","openshift-network-operator/iptables-alerter-r7h65","openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m","openshift-kube-apiserver/installer-1-master-0","openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc","o
penshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr","openshift-dns-operator/dns-operator-9c5679d8f-jrmkr","openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zcm5j","openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm","openshift-ingress/router-default-7dcf5569b5-82tbk","openshift-kube-apiserver/installer-4-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-scheduler/installer-6-retry-1-master-0","openshift-network-diagnostics/network-check-source-b4bf74f6-4kpnv","openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr","openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s","openshift-config-operator/openshift-config-operator-95bf4f4d-495pg","openshift-network-node-identity/network-node-identity-7fl4x","openshift-service-ca/service-ca-79bc6b8d76-jjcsv","openshift-kube-storage-version-migrator/migrator-8487694857-8tqwj","openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m","openshift-monitoring/prometheus-operator-6c8df6d4b-886k6","openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln","openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q","openshift-etcd/etcd-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-2l6cq","openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq","openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm","openshift-ingress-canary/ingress-canary-rzksb","openshift-kube-apiserver/installer-4-retry-1-master-0","openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-rtnkl","openshift-cluster-machine-approver/machine-approver-5c6485487f-95jvh","openshift-cluster-node-tuning-operator/tuned-6rhgt","openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t","openshift-marketplace/redhat-marketplace-8w5rc","openshift-insights/insights-operator-68bf6ff9d6-b
dcw7","openshift-kube-controller-manager/installer-4-retry-1-master-0","openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68","openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rslmx","openshift-marketplace/redhat-operators-jl7c8","openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4","openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-4kr54","openshift-kube-controller-manager/installer-4-master-0","openshift-kube-scheduler/installer-4-master-0","openshift-machine-api/machine-api-operator-6fbb6cf6f9-xnvn9","openshift-machine-config-operator/machine-config-daemon-mtdk2","openshift-multus/multus-xgdvw","openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c","openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9","openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb","openshift-machine-api/cluster-autoscaler-operator-866dc4744-mw9tt","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s","openshift-ovn-kubernetes/ovnkube-node-frnfl","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698","openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv","openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx","openshift-kube-scheduler/installer-5-master-0","openshift-monitoring/openshift-state-metrics-5dc6c74576-6rrn7","openshift-operator-lifecycle-manager/packageserver-7b64dcc66c-2vx58","openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-5dg2r","openshift-kube-controller-manager/installer-2-master-0","openshift-kube-controller-manager/installer-3-master-0"] Mar 18 10:10:42.142506 master-0 kubenswrapper[30420]: I0318 10:10:42.142468 30420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-ttq68" Mar 18 10:10:42.142808 master-0 kubenswrapper[30420]: I0318 10:10:42.142776 30420 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Mar 18 10:10:42.142898 master-0 kubenswrapper[30420]: I0318 10:10:42.142871 30420 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Mar 18 10:10:42.158179 master-0 kubenswrapper[30420]: I0318 10:10:42.156295 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 18 10:10:42.158179 master-0 kubenswrapper[30420]: I0318 10:10:42.156951 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 18 10:10:42.168259 master-0 kubenswrapper[30420]: I0318 10:10:42.167623 30420 kubelet.go:2566] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="12ad3f33-0f81-4684-b296-86becb421afc" Mar 18 10:10:42.168259 master-0 kubenswrapper[30420]: I0318 10:10:42.167721 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 18 10:10:42.168259 master-0 kubenswrapper[30420]: I0318 10:10:42.167920 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 18 10:10:42.168259 master-0 kubenswrapper[30420]: I0318 10:10:42.168041 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 18 10:10:42.168259 master-0 kubenswrapper[30420]: I0318 10:10:42.168170 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 18 10:10:42.168643 master-0 kubenswrapper[30420]: I0318 10:10:42.168339 30420 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 18 10:10:42.170077 master-0 kubenswrapper[30420]: I0318 10:10:42.168770 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 18 10:10:42.173894 master-0 kubenswrapper[30420]: I0318 10:10:42.173867 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 10:10:42.174194 master-0 kubenswrapper[30420]: I0318 10:10:42.174160 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 10:10:42.176079 master-0 kubenswrapper[30420]: I0318 10:10:42.175596 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 10:10:42.186929 master-0 kubenswrapper[30420]: I0318 10:10:42.183720 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Mar 18 10:10:42.186929 master-0 kubenswrapper[30420]: I0318 10:10:42.184324 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 18 10:10:42.188860 master-0 kubenswrapper[30420]: I0318 10:10:42.188798 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 10:10:42.189028 master-0 kubenswrapper[30420]: I0318 10:10:42.188786 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 10:10:42.194560 master-0 kubenswrapper[30420]: I0318 10:10:42.193971 30420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 18 10:10:42.194560 master-0 kubenswrapper[30420]: I0318 10:10:42.194029 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-retry-1-master-0" Mar 18 10:10:42.194560 master-0 kubenswrapper[30420]: I0318 10:10:42.194094 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" Mar 18 10:10:42.194560 master-0 kubenswrapper[30420]: I0318 10:10:42.194251 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 10:10:42.196965 master-0 kubenswrapper[30420]: I0318 10:10:42.196925 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 18 10:10:42.197793 master-0 kubenswrapper[30420]: I0318 10:10:42.197657 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 18 10:10:42.197933 master-0 kubenswrapper[30420]: I0318 10:10:42.197915 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 18 10:10:42.198630 master-0 kubenswrapper[30420]: I0318 10:10:42.198590 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 18 10:10:42.200905 master-0 kubenswrapper[30420]: I0318 10:10:42.198969 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 18 10:10:42.200905 master-0 kubenswrapper[30420]: I0318 10:10:42.199205 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 18 10:10:42.201599 master-0 kubenswrapper[30420]: I0318 10:10:42.201431 30420 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 18 10:10:42.201674 master-0 kubenswrapper[30420]: I0318 10:10:42.201513 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 18 10:10:42.202237 master-0 kubenswrapper[30420]: I0318 10:10:42.201966 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 18 10:10:42.202237 master-0 kubenswrapper[30420]: I0318 10:10:42.202012 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 18 10:10:42.202333 master-0 kubenswrapper[30420]: I0318 10:10:42.202285 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 18 10:10:42.202703 master-0 kubenswrapper[30420]: I0318 10:10:42.202432 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 18 10:10:42.202703 master-0 kubenswrapper[30420]: I0318 10:10:42.202528 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 18 10:10:42.202703 master-0 kubenswrapper[30420]: I0318 10:10:42.202551 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 18 10:10:42.202703 master-0 kubenswrapper[30420]: I0318 10:10:42.202652 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 18 10:10:42.202703 master-0 kubenswrapper[30420]: I0318 10:10:42.202654 30420 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 18 10:10:42.202703 master-0 kubenswrapper[30420]: I0318 10:10:42.202661 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 18 10:10:42.203207 master-0 kubenswrapper[30420]: I0318 10:10:42.202797 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 18 10:10:42.203207 master-0 kubenswrapper[30420]: I0318 10:10:42.202811 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 18 10:10:42.203207 master-0 kubenswrapper[30420]: I0318 10:10:42.202811 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 18 10:10:42.203207 master-0 kubenswrapper[30420]: I0318 10:10:42.202877 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 18 10:10:42.203207 master-0 kubenswrapper[30420]: I0318 10:10:42.202952 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 18 10:10:42.203207 master-0 kubenswrapper[30420]: I0318 10:10:42.203008 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 18 10:10:42.203914 master-0 kubenswrapper[30420]: I0318 10:10:42.203840 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 18 10:10:42.204463 master-0 kubenswrapper[30420]: I0318 10:10:42.203993 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 18 10:10:42.204463 master-0 kubenswrapper[30420]: I0318 10:10:42.204029 
30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 18 10:10:42.204463 master-0 kubenswrapper[30420]: I0318 10:10:42.204082 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 18 10:10:42.204463 master-0 kubenswrapper[30420]: I0318 10:10:42.204108 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 18 10:10:42.204463 master-0 kubenswrapper[30420]: I0318 10:10:42.204148 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 18 10:10:42.204463 master-0 kubenswrapper[30420]: I0318 10:10:42.204192 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 18 10:10:42.204463 master-0 kubenswrapper[30420]: I0318 10:10:42.204231 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 18 10:10:42.204463 master-0 kubenswrapper[30420]: I0318 10:10:42.204325 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 18 10:10:42.204463 master-0 kubenswrapper[30420]: I0318 10:10:42.204367 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 18 10:10:42.204860 master-0 kubenswrapper[30420]: I0318 10:10:42.204702 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 18 10:10:42.208697 master-0 kubenswrapper[30420]: I0318 10:10:42.205221 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 18 10:10:42.208697 master-0 kubenswrapper[30420]: I0318 10:10:42.205474 30420 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 18 10:10:42.208697 master-0 kubenswrapper[30420]: I0318 10:10:42.206063 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 18 10:10:42.208697 master-0 kubenswrapper[30420]: I0318 10:10:42.206490 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 18 10:10:42.208697 master-0 kubenswrapper[30420]: I0318 10:10:42.206780 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 18 10:10:42.208697 master-0 kubenswrapper[30420]: I0318 10:10:42.206838 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 18 10:10:42.208697 master-0 kubenswrapper[30420]: I0318 10:10:42.206926 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 18 10:10:42.208697 master-0 kubenswrapper[30420]: I0318 10:10:42.207170 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 18 10:10:42.208697 master-0 kubenswrapper[30420]: I0318 10:10:42.207306 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 18 10:10:42.208697 master-0 kubenswrapper[30420]: I0318 10:10:42.207556 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 18 10:10:42.208697 master-0 kubenswrapper[30420]: I0318 10:10:42.207874 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" 
Mar 18 10:10:42.208697 master-0 kubenswrapper[30420]: I0318 10:10:42.207938 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 18 10:10:42.208697 master-0 kubenswrapper[30420]: I0318 10:10:42.207986 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Mar 18 10:10:42.208697 master-0 kubenswrapper[30420]: I0318 10:10:42.208019 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 18 10:10:42.208697 master-0 kubenswrapper[30420]: I0318 10:10:42.208049 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 18 10:10:42.208697 master-0 kubenswrapper[30420]: I0318 10:10:42.208097 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Mar 18 10:10:42.208697 master-0 kubenswrapper[30420]: I0318 10:10:42.207936 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 18 10:10:42.208697 master-0 kubenswrapper[30420]: I0318 10:10:42.208201 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 18 10:10:42.208697 master-0 kubenswrapper[30420]: I0318 10:10:42.208221 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 18 10:10:42.208697 master-0 kubenswrapper[30420]: I0318 10:10:42.208247 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Mar 18 10:10:42.208697 master-0 kubenswrapper[30420]: I0318 10:10:42.207991 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 18 10:10:42.208697 master-0 kubenswrapper[30420]: I0318 10:10:42.208336 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 18 10:10:42.208697 master-0 kubenswrapper[30420]: I0318 10:10:42.208367 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Mar 18 10:10:42.208697 master-0 kubenswrapper[30420]: I0318 10:10:42.208382 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 18 10:10:42.208697 master-0 kubenswrapper[30420]: I0318 10:10:42.208394 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 18 10:10:42.208697 master-0 kubenswrapper[30420]: I0318 10:10:42.208521 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Mar 18 10:10:42.208697 master-0 kubenswrapper[30420]: I0318 10:10:42.208565 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Mar 18 10:10:42.208697 master-0 kubenswrapper[30420]: I0318 10:10:42.208637 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 18 10:10:42.208697 master-0 kubenswrapper[30420]: I0318 10:10:42.208663 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Mar 18 10:10:42.211358 master-0 kubenswrapper[30420]: I0318 10:10:42.208789 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 18 10:10:42.211358 master-0 kubenswrapper[30420]: I0318 10:10:42.208813 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 18 10:10:42.211358 master-0 kubenswrapper[30420]: I0318 10:10:42.207884 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 18 10:10:42.211358 master-0 kubenswrapper[30420]: I0318 10:10:42.209071 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Mar 18 10:10:42.211358 master-0 kubenswrapper[30420]: I0318 10:10:42.209202 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 18 10:10:42.211358 master-0 kubenswrapper[30420]: I0318 10:10:42.209258 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 18 10:10:42.211358 master-0 kubenswrapper[30420]: I0318 10:10:42.209295 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 18 10:10:42.211358 master-0 kubenswrapper[30420]: I0318 10:10:42.209308 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 18 10:10:42.211358 master-0 kubenswrapper[30420]: I0318 10:10:42.209317 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config"
Mar 18 10:10:42.211358 master-0 kubenswrapper[30420]: I0318 10:10:42.209387 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 18 10:10:42.211358 master-0 kubenswrapper[30420]: I0318 10:10:42.209433 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 18 10:10:42.211358 master-0 kubenswrapper[30420]: I0318 10:10:42.209539 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Mar 18 10:10:42.211358 master-0 kubenswrapper[30420]: I0318 10:10:42.209630 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 18 10:10:42.211358 master-0 kubenswrapper[30420]: I0318 10:10:42.209707 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 18 10:10:42.211358 master-0 kubenswrapper[30420]: I0318 10:10:42.210188 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 18 10:10:42.211358 master-0 kubenswrapper[30420]: I0318 10:10:42.210744 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 18 10:10:42.211358 master-0 kubenswrapper[30420]: I0318 10:10:42.211019 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Mar 18 10:10:42.215344 master-0 kubenswrapper[30420]: I0318 10:10:42.215292 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 18 10:10:42.215493 master-0 kubenswrapper[30420]: I0318 10:10:42.215454 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Mar 18 10:10:42.215656 master-0 kubenswrapper[30420]: I0318 10:10:42.215622 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 18 10:10:42.215847 master-0 kubenswrapper[30420]: I0318 10:10:42.215800 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 18 10:10:42.216063 master-0 kubenswrapper[30420]: I0318 10:10:42.216031 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 18 10:10:42.216134 master-0 kubenswrapper[30420]: I0318 10:10:42.216108 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 18 10:10:42.216380 master-0 kubenswrapper[30420]: I0318 10:10:42.216348 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 18 10:10:42.216544 master-0 kubenswrapper[30420]: I0318 10:10:42.216524 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Mar 18 10:10:42.217178 master-0 kubenswrapper[30420]: I0318 10:10:42.217132 30420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49fac1b46a11e49501805e891baae4a9" path="/var/lib/kubelet/pods/49fac1b46a11e49501805e891baae4a9/volumes"
Mar 18 10:10:42.217509 master-0 kubenswrapper[30420]: I0318 10:10:42.217473 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Mar 18 10:10:42.217835 master-0 kubenswrapper[30420]: I0318 10:10:42.217796 30420 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID=""
Mar 18 10:10:42.225436 master-0 kubenswrapper[30420]: I0318 10:10:42.225385 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Mar 18 10:10:42.227382 master-0 kubenswrapper[30420]: I0318 10:10:42.227182 30420 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 18 10:10:42.227767 master-0 kubenswrapper[30420]: I0318 10:10:42.227742 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 18 10:10:42.231452 master-0 kubenswrapper[30420]: I0318 10:10:42.231375 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Mar 18 10:10:42.232251 master-0 kubenswrapper[30420]: I0318 10:10:42.232223 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 18 10:10:42.232838 master-0 kubenswrapper[30420]: I0318 10:10:42.232804 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 18 10:10:42.234765 master-0 kubenswrapper[30420]: I0318 10:10:42.234064 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 18 10:10:42.235924 master-0 kubenswrapper[30420]: I0318 10:10:42.235904 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Mar 18 10:10:42.263571 master-0 kubenswrapper[30420]: I0318 10:10:42.263511 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ad4aa30-f7d5-47ca-b01e-2643f7195685-auth-proxy-config\") pod \"machine-approver-5c6485487f-95jvh\" (UID: \"1ad4aa30-f7d5-47ca-b01e-2643f7195685\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-95jvh"
Mar 18 10:10:42.263571 master-0 kubenswrapper[30420]: I0318 10:10:42.263554 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-client-ca-bundle\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m"
Mar 18 10:10:42.263746 master-0 kubenswrapper[30420]: I0318 10:10:42.263590 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d72e695-0183-4ee8-8add-5425e67f7138-config\") pod \"openshift-apiserver-operator-d65958b8-zz68c\" (UID: \"0d72e695-0183-4ee8-8add-5425e67f7138\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c"
Mar 18 10:10:42.263746 master-0 kubenswrapper[30420]: I0318 10:10:42.263606 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5dk8\" (UniqueName: \"kubernetes.io/projected/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4-kube-api-access-p5dk8\") pod \"openshift-controller-manager-operator-8c94f4649-g25jq\" (UID: \"3646e0cd-49c9-4a98-a2e3-efe9359cc6c4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq"
Mar 18 10:10:42.263746 master-0 kubenswrapper[30420]: I0318 10:10:42.263633 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-wtmp\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t"
Mar 18 10:10:42.263746 master-0 kubenswrapper[30420]: I0318 10:10:42.263649 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-client-ca\") pod \"controller-manager-6c87d45bb4-vxcx9\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") " pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9"
Mar 18 10:10:42.263746 master-0 kubenswrapper[30420]: I0318 10:10:42.263666 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-8kx9m\" (UID: \"f69a00b6-d908-4485-bb0d-57594fc01d24\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m"
Mar 18 10:10:42.263746 master-0 kubenswrapper[30420]: I0318 10:10:42.263682 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fjk8\" (UniqueName: \"kubernetes.io/projected/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480-kube-api-access-9fjk8\") pod \"openshift-config-operator-95bf4f4d-495pg\" (UID: \"0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg"
Mar 18 10:10:42.263746 master-0 kubenswrapper[30420]: I0318 10:10:42.263697 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-run-multus-certs\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 10:10:42.263746 master-0 kubenswrapper[30420]: I0318 10:10:42.263714 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/b6948f93-b573-4f09-b754-aaa2269e2875-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-77n8q\" (UID: \"b6948f93-b573-4f09-b754-aaa2269e2875\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q"
Mar 18 10:10:42.263746 master-0 kubenswrapper[30420]: I0318 10:10:42.263731 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9cfd2323-c33a-4d80-9c25-710920c0e605-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-886k6\" (UID: \"9cfd2323-c33a-4d80-9c25-710920c0e605\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-886k6"
Mar 18 10:10:42.263746 master-0 kubenswrapper[30420]: I0318 10:10:42.263747 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-var-lib-cni-multus\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 10:10:42.264137 master-0 kubenswrapper[30420]: I0318 10:10:42.263762 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db376fea-5756-4bc2-9685-f32730b5a6f7-utilities\") pod \"community-operators-nzqck\" (UID: \"db376fea-5756-4bc2-9685-f32730b5a6f7\") " pod="openshift-marketplace/community-operators-nzqck"
Mar 18 10:10:42.264137 master-0 kubenswrapper[30420]: I0318 10:10:42.263779 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a078565a-6970-4f42-84f4-938f1d637245-serving-cert\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm"
Mar 18 10:10:42.264137 master-0 kubenswrapper[30420]: I0318 10:10:42.263796 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxl7x\" (UniqueName: \"kubernetes.io/projected/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-kube-api-access-kxl7x\") pod \"catalogd-controller-manager-6864dc98f7-nq7mw\" (UID: \"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw"
Mar 18 10:10:42.264137 master-0 kubenswrapper[30420]: I0318 10:10:42.263812 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ccdc221-4ec5-487e-8ec4-85284ed628d8-metrics-tls\") pod \"network-operator-7bd846bfc4-8srnz\" (UID: \"9ccdc221-4ec5-487e-8ec4-85284ed628d8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-8srnz"
Mar 18 10:10:42.264137 master-0 kubenswrapper[30420]: I0318 10:10:42.263847 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert\") pod \"olm-operator-5c9796789-hc74k\" (UID: \"db52ca42-e458-407f-9eeb-bf6de6405edc\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k"
Mar 18 10:10:42.264137 master-0 kubenswrapper[30420]: I0318 10:10:42.263867 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f076eaf0-b041-4db0-ba06-3d85e23bb654-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-4q9tr\" (UID: \"f076eaf0-b041-4db0-ba06-3d85e23bb654\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr"
Mar 18 10:10:42.264137 master-0 kubenswrapper[30420]: I0318 10:10:42.263884 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/196e7607-1ddf-467b-9901-b4be746130a1-node-bootstrap-token\") pod \"machine-config-server-9wnkm\" (UID: \"196e7607-1ddf-467b-9901-b4be746130a1\") " pod="openshift-machine-config-operator/machine-config-server-9wnkm"
Mar 18 10:10:42.264137 master-0 kubenswrapper[30420]: I0318 10:10:42.263902 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-cni-dir\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 10:10:42.264137 master-0 kubenswrapper[30420]: I0318 10:10:42.263920 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpk5h\" (UniqueName: \"kubernetes.io/projected/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-kube-api-access-gpk5h\") pod \"route-controller-manager-5657df7dd8-4pp68\" (UID: \"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d\") " pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68"
Mar 18 10:10:42.264137 master-0 kubenswrapper[30420]: I0318 10:10:42.263938 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2chb\" (UniqueName: \"kubernetes.io/projected/8cb5158f-2199-42c0-995a-8490c9ec8a95-kube-api-access-p2chb\") pod \"dns-operator-9c5679d8f-jrmkr\" (UID: \"8cb5158f-2199-42c0-995a-8490c9ec8a95\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-jrmkr"
Mar 18 10:10:42.264137 master-0 kubenswrapper[30420]: I0318 10:10:42.263954 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1084562a-20a0-432d-b739-90bc0a4daff2-images\") pod \"cluster-baremetal-operator-6f69995874-lnq7l\" (UID: \"1084562a-20a0-432d-b739-90bc0a4daff2\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l"
Mar 18 10:10:42.264137 master-0 kubenswrapper[30420]: I0318 10:10:42.263971 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0c7b317c-d141-4e69-9c82-4a5dda6c3248-etcd-client\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf"
Mar 18 10:10:42.264137 master-0 kubenswrapper[30420]: I0318 10:10:42.263991 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/f69a00b6-d908-4485-bb0d-57594fc01d24-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-8kx9m\" (UID: \"f69a00b6-d908-4485-bb0d-57594fc01d24\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m"
Mar 18 10:10:42.264137 master-0 kubenswrapper[30420]: I0318 10:10:42.264006 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/196e7607-1ddf-467b-9901-b4be746130a1-certs\") pod \"machine-config-server-9wnkm\" (UID: \"196e7607-1ddf-467b-9901-b4be746130a1\") " pod="openshift-machine-config-operator/machine-config-server-9wnkm"
Mar 18 10:10:42.264137 master-0 kubenswrapper[30420]: I0318 10:10:42.264023 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4qp9\" (UniqueName: \"kubernetes.io/projected/d4d2218c-f9df-4d43-8727-ed3a920e23f7-kube-api-access-w4qp9\") pod \"package-server-manager-7b95f86987-r8fkv\" (UID: \"d4d2218c-f9df-4d43-8727-ed3a920e23f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv"
Mar 18 10:10:42.264137 master-0 kubenswrapper[30420]: I0318 10:10:42.264039 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z459j\" (UniqueName: \"kubernetes.io/projected/43d54514-989c-4c82-93f9-153b44eacdd1-kube-api-access-z459j\") pod \"router-default-7dcf5569b5-82tbk\" (UID: \"43d54514-989c-4c82-93f9-153b44eacdd1\") " pod="openshift-ingress/router-default-7dcf5569b5-82tbk"
Mar 18 10:10:42.264137 master-0 kubenswrapper[30420]: I0318 10:10:42.264053 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-run-netns\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 10:10:42.264757 master-0 kubenswrapper[30420]: I0318 10:10:42.264512 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a078565a-6970-4f42-84f4-938f1d637245-serving-cert\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm"
Mar 18 10:10:42.264916 master-0 kubenswrapper[30420]: I0318 10:10:42.264879 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d72e695-0183-4ee8-8add-5425e67f7138-config\") pod \"openshift-apiserver-operator-d65958b8-zz68c\" (UID: \"0d72e695-0183-4ee8-8add-5425e67f7138\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c"
Mar 18 10:10:42.265293 master-0 kubenswrapper[30420]: I0318 10:10:42.265255 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f69a00b6-d908-4485-bb0d-57594fc01d24-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-8kx9m\" (UID: \"f69a00b6-d908-4485-bb0d-57594fc01d24\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m"
Mar 18 10:10:42.265532 master-0 kubenswrapper[30420]: I0318 10:10:42.265500 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db376fea-5756-4bc2-9685-f32730b5a6f7-utilities\") pod \"community-operators-nzqck\" (UID: \"db376fea-5756-4bc2-9685-f32730b5a6f7\") " pod="openshift-marketplace/community-operators-nzqck"
Mar 18 10:10:42.266609 master-0 kubenswrapper[30420]: I0318 10:10:42.265874 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ccdc221-4ec5-487e-8ec4-85284ed628d8-metrics-tls\") pod \"network-operator-7bd846bfc4-8srnz\" (UID: \"9ccdc221-4ec5-487e-8ec4-85284ed628d8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-8srnz"
Mar 18 10:10:42.266609 master-0 kubenswrapper[30420]: I0318 10:10:42.266127 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/db52ca42-e458-407f-9eeb-bf6de6405edc-srv-cert\") pod \"olm-operator-5c9796789-hc74k\" (UID: \"db52ca42-e458-407f-9eeb-bf6de6405edc\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k"
Mar 18 10:10:42.266609 master-0 kubenswrapper[30420]: I0318 10:10:42.266400 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f076eaf0-b041-4db0-ba06-3d85e23bb654-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-4q9tr\" (UID: \"f076eaf0-b041-4db0-ba06-3d85e23bb654\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr"
Mar 18 10:10:42.266782 master-0 kubenswrapper[30420]: I0318 10:10:42.266758 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/f69a00b6-d908-4485-bb0d-57594fc01d24-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-8kx9m\" (UID: \"f69a00b6-d908-4485-bb0d-57594fc01d24\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m"
Mar 18 10:10:42.269344 master-0 kubenswrapper[30420]: I0318 10:10:42.269294 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 18 10:10:42.269455 master-0 kubenswrapper[30420]: I0318 10:10:42.269342 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-kubelet\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 10:10:42.269455 master-0 kubenswrapper[30420]: I0318 10:10:42.269411 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-audit\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf"
Mar 18 10:10:42.269455 master-0 kubenswrapper[30420]: I0318 10:10:42.269440 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-image-import-ca\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf"
Mar 18 10:10:42.269601 master-0 kubenswrapper[30420]: I0318 10:10:42.269457 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq4rm\" (UniqueName: \"kubernetes.io/projected/da04c6fa-4916-4bed-a6b2-cc92bf2ee379-kube-api-access-vq4rm\") pod \"dns-default-z9sf5\" (UID: \"da04c6fa-4916-4bed-a6b2-cc92bf2ee379\") " pod="openshift-dns/dns-default-z9sf5"
Mar 18 10:10:42.269601 master-0 kubenswrapper[30420]: I0318 10:10:42.269482 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtnxf\" (UniqueName: \"kubernetes.io/projected/5900a401-21c2-47f0-a921-47c648da558d-kube-api-access-qtnxf\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg"
Mar 18 10:10:42.269719 master-0 kubenswrapper[30420]: I0318 10:10:42.269579 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2635254-a491-42e5-b598-461c24bf77ca-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6"
Mar 18 10:10:42.269719 master-0 kubenswrapper[30420]: I0318 10:10:42.269672 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-metrics-server-audit-profiles\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m"
Mar 18 10:10:42.269719 master-0 kubenswrapper[30420]: I0318 10:10:42.269701 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b6948f93-b573-4f09-b754-aaa2269e2875-cache\") pod \"operator-controller-controller-manager-57777556ff-77n8q\" (UID: \"b6948f93-b573-4f09-b754-aaa2269e2875\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q"
Mar 18 10:10:42.269851 master-0 kubenswrapper[30420]: I0318 10:10:42.269728 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/b6948f93-b573-4f09-b754-aaa2269e2875-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-77n8q\" (UID: \"b6948f93-b573-4f09-b754-aaa2269e2875\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q"
Mar 18 10:10:42.269851 master-0 kubenswrapper[30420]: I0318 10:10:42.269811 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a078565a-6970-4f42-84f4-938f1d637245-config\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm"
Mar 18 10:10:42.270000 master-0 kubenswrapper[30420]: I0318 10:10:42.269970 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert\") pod \"catalog-operator-68f85b4d6c-fhz5s\" (UID: \"ee376320-9ca0-444d-ab37-9cbcb6729b11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s"
Mar 18 10:10:42.270049 master-0 kubenswrapper[30420]: I0318 10:10:42.270012 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e8d3cf68-ed97-45b9-8c83-b42bb1f789fc-hosts-file\") pod \"node-resolver-hjpz8\" (UID: \"e8d3cf68-ed97-45b9-8c83-b42bb1f789fc\") " pod="openshift-dns/node-resolver-hjpz8"
Mar 18 10:10:42.270049 master-0 kubenswrapper[30420]: I0318 10:10:42.270045 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8b906fc0-f2bf-4586-97e6-921bbd467b65-etcd-client\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln"
Mar 18 10:10:42.270132 master-0 kubenswrapper[30420]: I0318 10:10:42.270065 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e5e0836f-c0b4-40cd-9f63-55774da2740e-proxy-tls\") pod \"machine-config-daemon-mtdk2\" (UID: \"e5e0836f-c0b4-40cd-9f63-55774da2740e\") " pod="openshift-machine-config-operator/machine-config-daemon-mtdk2"
Mar 18 10:10:42.270132 master-0 kubenswrapper[30420]: I0318 10:10:42.270087 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2d014721-ed53-447a-b737-c496bbba18be-proxy-tls\") pod \"machine-config-operator-84d549f6d5-gnl5t\" (UID: \"2d014721-ed53-447a-b737-c496bbba18be\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t"
Mar 18 10:10:42.270132 master-0 kubenswrapper[30420]: I0318 10:10:42.270123 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a6a616d-012a-479e-ab3d-b21295ea1805-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-smghb\" (UID: \"6a6a616d-012a-479e-ab3d-b21295ea1805\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb"
Mar 18 10:10:42.270267 master-0 kubenswrapper[30420]: I0318 10:10:42.270146 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvx6m\" (UniqueName: \"kubernetes.io/projected/74476be5-669a-4737-b93b-c4870423a4da-kube-api-access-nvx6m\") pod \"ingress-canary-rzksb\" (UID: \"74476be5-669a-4737-b93b-c4870423a4da\") " pod="openshift-ingress-canary/ingress-canary-rzksb"
Mar 18 10:10:42.270267 master-0 kubenswrapper[30420]: I0318 10:10:42.270168 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-os-release\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 10:10:42.270267 master-0 kubenswrapper[30420]: I0318 10:10:42.270189 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-log-socket\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 10:10:42.270267 master-0 kubenswrapper[30420]: I0318 10:10:42.270213 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0c7b317c-d141-4e69-9c82-4a5dda6c3248-node-pullsecrets\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf"
Mar 18 10:10:42.270267 master-0 kubenswrapper[30420]: I0318 10:10:42.270235 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f875878f-3588-42f1-9488-750d9f4582f8-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-ssnvh\" (UID: \"f875878f-3588-42f1-9488-750d9f4582f8\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-ssnvh"
Mar 18 10:10:42.270267 master-0 kubenswrapper[30420]: I0318 10:10:42.270257 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghd2r\" (UniqueName: \"kubernetes.io/projected/9ccdc221-4ec5-487e-8ec4-85284ed628d8-kube-api-access-ghd2r\") pod \"network-operator-7bd846bfc4-8srnz\" (UID: \"9ccdc221-4ec5-487e-8ec4-85284ed628d8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-8srnz"
Mar 18 10:10:42.270490 master-0 kubenswrapper[30420]: I0318 10:10:42.270278 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhzg4\" (UniqueName: \"kubernetes.io/projected/d26036f1-bdce-4ec5-873f-962fa7e8e6c1-kube-api-access-lhzg4\") pod \"cluster-olm-operator-67dcd4998-mrc8q\" (UID: \"d26036f1-bdce-4ec5-873f-962fa7e8e6c1\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q"
Mar 18 10:10:42.270490 master-0 kubenswrapper[30420]: I0318 10:10:42.270302 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6qn5\" (UniqueName: \"kubernetes.io/projected/db376fea-5756-4bc2-9685-f32730b5a6f7-kube-api-access-r6qn5\") pod \"community-operators-nzqck\" (UID: \"db376fea-5756-4bc2-9685-f32730b5a6f7\") " pod="openshift-marketplace/community-operators-nzqck"
Mar 18 10:10:42.270490 master-0 kubenswrapper[30420]: I0318 10:10:42.270327 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-slash\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 10:10:42.270490 master-0 kubenswrapper[30420]: I0318 10:10:42.270355 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b906fc0-f2bf-4586-97e6-921bbd467b65-serving-cert\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln"
Mar 18 10:10:42.270490 master-0 kubenswrapper[30420]: I0318 10:10:42.270378 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8b906fc0-f2bf-4586-97e6-921bbd467b65-encryption-config\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln"
Mar 18 10:10:42.270490 master-0 kubenswrapper[30420]: I0318 10:10:42.270405 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k94j4\" (UniqueName: \"kubernetes.io/projected/e5e0836f-c0b4-40cd-9f63-55774da2740e-kube-api-access-k94j4\") pod \"machine-config-daemon-mtdk2\" (UID: \"e5e0836f-c0b4-40cd-9f63-55774da2740e\") " pod="openshift-machine-config-operator/machine-config-daemon-mtdk2"
Mar 18 10:10:42.270490 master-0 kubenswrapper[30420]: I0318 10:10:42.270429 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29fbc78b-1887-40d4-8165-f0f7cc40b583-config\") pod \"machine-api-operator-6fbb6cf6f9-xnvn9\" (UID: \"29fbc78b-1887-40d4-8165-f0f7cc40b583\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-xnvn9"
Mar 18 10:10:42.270490 master-0 kubenswrapper[30420]: I0318 10:10:42.270453 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shbrj\" (UniqueName: \"kubernetes.io/projected/6f266bad-8b30-4300-ad93-9d48e61f2440-kube-api-access-shbrj\") pod \"marketplace-operator-89ccd998f-2glpv\" (UID: \"6f266bad-8b30-4300-ad93-9d48e61f2440\") " 
pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" Mar 18 10:10:42.270490 master-0 kubenswrapper[30420]: I0318 10:10:42.270479 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmnjp\" (UniqueName: \"kubernetes.io/projected/ce65f61f-8e3a-47d5-ac12-ad4ab05d2850-kube-api-access-jmnjp\") pod \"cluster-samples-operator-85f7577d78-5dg2r\" (UID: \"ce65f61f-8e3a-47d5-ac12-ad4ab05d2850\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-5dg2r" Mar 18 10:10:42.270859 master-0 kubenswrapper[30420]: I0318 10:10:42.270503 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59hld\" (UniqueName: \"kubernetes.io/projected/e8d3cf68-ed97-45b9-8c83-b42bb1f789fc-kube-api-access-59hld\") pod \"node-resolver-hjpz8\" (UID: \"e8d3cf68-ed97-45b9-8c83-b42bb1f789fc\") " pod="openshift-dns/node-resolver-hjpz8" Mar 18 10:10:42.270859 master-0 kubenswrapper[30420]: I0318 10:10:42.270527 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t77j8\" (UniqueName: \"kubernetes.io/projected/b0f77d68-f228-4f82-befb-fb2a2ce2e976-kube-api-access-t77j8\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 10:10:42.270859 master-0 kubenswrapper[30420]: I0318 10:10:42.270553 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b906fc0-f2bf-4586-97e6-921bbd467b65-trusted-ca-bundle\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 10:10:42.270859 master-0 kubenswrapper[30420]: I0318 10:10:42.270580 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0999f781-3299-4cb6-ba76-2a4f4584c685-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-pzqqc\" (UID: \"0999f781-3299-4cb6-ba76-2a4f4584c685\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc" Mar 18 10:10:42.270859 master-0 kubenswrapper[30420]: I0318 10:10:42.270603 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/d26036f1-bdce-4ec5-873f-962fa7e8e6c1-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-mrc8q\" (UID: \"d26036f1-bdce-4ec5-873f-962fa7e8e6c1\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q" Mar 18 10:10:42.270859 master-0 kubenswrapper[30420]: I0318 10:10:42.270627 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-rpxn4\" (UID: \"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4" Mar 18 10:10:42.270859 master-0 kubenswrapper[30420]: I0318 10:10:42.270654 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4g9s\" (UniqueName: \"kubernetes.io/projected/196e7607-1ddf-467b-9901-b4be746130a1-kube-api-access-l4g9s\") pod \"machine-config-server-9wnkm\" (UID: \"196e7607-1ddf-467b-9901-b4be746130a1\") " pod="openshift-machine-config-operator/machine-config-server-9wnkm" Mar 18 10:10:42.270859 master-0 kubenswrapper[30420]: I0318 10:10:42.270768 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-client-ca\") pod \"route-controller-manager-5657df7dd8-4pp68\" (UID: \"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d\") " pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" Mar 18 10:10:42.270859 master-0 kubenswrapper[30420]: I0318 10:10:42.270791 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 10:10:42.270859 master-0 kubenswrapper[30420]: I0318 10:10:42.270811 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0999f781-3299-4cb6-ba76-2a4f4584c685-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-pzqqc\" (UID: \"0999f781-3299-4cb6-ba76-2a4f4584c685\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc" Mar 18 10:10:42.271229 master-0 kubenswrapper[30420]: I0318 10:10:42.270884 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb7tz\" (UniqueName: \"kubernetes.io/projected/8ee99294-4785-49d0-b493-0d734cf09396-kube-api-access-tb7tz\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m" Mar 18 10:10:42.271229 master-0 kubenswrapper[30420]: I0318 10:10:42.270903 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/bb942756-bac7-414d-b179-cebdce588a13-ovnkube-identity-cm\") pod \"network-node-identity-7fl4x\" (UID: 
\"bb942756-bac7-414d-b179-cebdce588a13\") " pod="openshift-network-node-identity/network-node-identity-7fl4x" Mar 18 10:10:42.271229 master-0 kubenswrapper[30420]: I0318 10:10:42.270924 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/91331360-dc70-45bb-a815-e00664bae6c4-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 10:10:42.271229 master-0 kubenswrapper[30420]: I0318 10:10:42.270941 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-serving-cert\") pod \"controller-manager-6c87d45bb4-vxcx9\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") " pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" Mar 18 10:10:42.271229 master-0 kubenswrapper[30420]: I0318 10:10:42.270960 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bql7p\" (UniqueName: \"kubernetes.io/projected/bdf80ddc-7c99-4f60-814b-ba98809ef41d-kube-api-access-bql7p\") pod \"packageserver-7b64dcc66c-2vx58\" (UID: \"bdf80ddc-7c99-4f60-814b-ba98809ef41d\") " pod="openshift-operator-lifecycle-manager/packageserver-7b64dcc66c-2vx58" Mar 18 10:10:42.271229 master-0 kubenswrapper[30420]: I0318 10:10:42.270978 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r7qd\" (UniqueName: \"kubernetes.io/projected/f69a00b6-d908-4485-bb0d-57594fc01d24-kube-api-access-5r7qd\") pod \"cluster-monitoring-operator-58845fbb57-8kx9m\" (UID: \"f69a00b6-d908-4485-bb0d-57594fc01d24\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m" Mar 18 10:10:42.271229 master-0 kubenswrapper[30420]: I0318 10:10:42.270996 30420 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwfph\" (UniqueName: \"kubernetes.io/projected/accc57fb-75f5-4f89-9804-6ede7f77e27c-kube-api-access-nwfph\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" Mar 18 10:10:42.271229 master-0 kubenswrapper[30420]: I0318 10:10:42.271014 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-config\") pod \"route-controller-manager-5657df7dd8-4pp68\" (UID: \"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d\") " pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" Mar 18 10:10:42.271229 master-0 kubenswrapper[30420]: I0318 10:10:42.270993 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a078565a-6970-4f42-84f4-938f1d637245-config\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" Mar 18 10:10:42.271229 master-0 kubenswrapper[30420]: I0318 10:10:42.271031 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx9p2\" (UniqueName: \"kubernetes.io/projected/db52ca42-e458-407f-9eeb-bf6de6405edc-kube-api-access-jx9p2\") pod \"olm-operator-5c9796789-hc74k\" (UID: \"db52ca42-e458-407f-9eeb-bf6de6405edc\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k" Mar 18 10:10:42.271229 master-0 kubenswrapper[30420]: I0318 10:10:42.271049 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ktpl\" (UniqueName: \"kubernetes.io/projected/bb942756-bac7-414d-b179-cebdce588a13-kube-api-access-2ktpl\") pod \"network-node-identity-7fl4x\" (UID: 
\"bb942756-bac7-414d-b179-cebdce588a13\") " pod="openshift-network-node-identity/network-node-identity-7fl4x" Mar 18 10:10:42.271675 master-0 kubenswrapper[30420]: I0318 10:10:42.271384 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b6948f93-b573-4f09-b754-aaa2269e2875-cache\") pod \"operator-controller-controller-manager-57777556ff-77n8q\" (UID: \"b6948f93-b573-4f09-b754-aaa2269e2875\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q" Mar 18 10:10:42.271675 master-0 kubenswrapper[30420]: I0318 10:10:42.271640 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2635254-a491-42e5-b598-461c24bf77ca-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" Mar 18 10:10:42.271805 master-0 kubenswrapper[30420]: I0318 10:10:42.271765 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/d26036f1-bdce-4ec5-873f-962fa7e8e6c1-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-mrc8q\" (UID: \"d26036f1-bdce-4ec5-873f-962fa7e8e6c1\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q" Mar 18 10:10:42.271878 master-0 kubenswrapper[30420]: I0318 10:10:42.271844 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5j9d\" (UniqueName: \"kubernetes.io/projected/b9c87410-8689-4884-b5a8-df3ecbb7f1a4-kube-api-access-l5j9d\") pod \"certified-operators-pdfn6\" (UID: \"b9c87410-8689-4884-b5a8-df3ecbb7f1a4\") " pod="openshift-marketplace/certified-operators-pdfn6" Mar 18 10:10:42.271878 master-0 kubenswrapper[30420]: I0318 10:10:42.271871 30420 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0c7b317c-d141-4e69-9c82-4a5dda6c3248-encryption-config\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 10:10:42.271958 master-0 kubenswrapper[30420]: I0318 10:10:42.271894 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/29fbc78b-1887-40d4-8165-f0f7cc40b583-images\") pod \"machine-api-operator-6fbb6cf6f9-xnvn9\" (UID: \"29fbc78b-1887-40d4-8165-f0f7cc40b583\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-xnvn9" Mar 18 10:10:42.271958 master-0 kubenswrapper[30420]: I0318 10:10:42.271921 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a6a616d-012a-479e-ab3d-b21295ea1805-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-smghb\" (UID: \"6a6a616d-012a-479e-ab3d-b21295ea1805\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb" Mar 18 10:10:42.271958 master-0 kubenswrapper[30420]: I0318 10:10:42.271951 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scz6j\" (UniqueName: \"kubernetes.io/projected/f88c2a18-11f5-45ef-aff1-3c5976716d85-kube-api-access-scz6j\") pod \"control-plane-machine-set-operator-6f97756bc8-zcm5j\" (UID: \"f88c2a18-11f5-45ef-aff1-3c5976716d85\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zcm5j" Mar 18 10:10:42.272076 master-0 kubenswrapper[30420]: I0318 10:10:42.271983 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x6ht\" (UniqueName: \"kubernetes.io/projected/0442ec6c-5973-40a5-a0c3-dc02de46d343-kube-api-access-5x6ht\") pod \"network-metrics-daemon-tbxt4\" (UID: 
\"0442ec6c-5973-40a5-a0c3-dc02de46d343\") " pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 10:10:42.272076 master-0 kubenswrapper[30420]: I0318 10:10:42.272008 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1084562a-20a0-432d-b739-90bc0a4daff2-config\") pod \"cluster-baremetal-operator-6f69995874-lnq7l\" (UID: \"1084562a-20a0-432d-b739-90bc0a4daff2\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l" Mar 18 10:10:42.272439 master-0 kubenswrapper[30420]: I0318 10:10:42.272201 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee376320-9ca0-444d-ab37-9cbcb6729b11-srv-cert\") pod \"catalog-operator-68f85b4d6c-fhz5s\" (UID: \"ee376320-9ca0-444d-ab37-9cbcb6729b11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s" Mar 18 10:10:42.272439 master-0 kubenswrapper[30420]: I0318 10:10:42.272252 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b906fc0-f2bf-4586-97e6-921bbd467b65-trusted-ca-bundle\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 10:10:42.272439 master-0 kubenswrapper[30420]: I0318 10:10:42.272340 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0999f781-3299-4cb6-ba76-2a4f4584c685-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-pzqqc\" (UID: \"0999f781-3299-4cb6-ba76-2a4f4584c685\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc" Mar 18 10:10:42.272595 master-0 kubenswrapper[30420]: I0318 10:10:42.272458 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8b906fc0-f2bf-4586-97e6-921bbd467b65-serving-cert\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 10:10:42.272595 master-0 kubenswrapper[30420]: I0318 10:10:42.272461 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m" Mar 18 10:10:42.272595 master-0 kubenswrapper[30420]: I0318 10:10:42.272517 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d72e695-0183-4ee8-8add-5425e67f7138-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-zz68c\" (UID: \"0d72e695-0183-4ee8-8add-5425e67f7138\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c" Mar 18 10:10:42.272595 master-0 kubenswrapper[30420]: I0318 10:10:42.272581 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8b906fc0-f2bf-4586-97e6-921bbd467b65-encryption-config\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 10:10:42.272862 master-0 kubenswrapper[30420]: I0318 10:10:42.272606 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bb942756-bac7-414d-b179-cebdce588a13-env-overrides\") pod \"network-node-identity-7fl4x\" (UID: \"bb942756-bac7-414d-b179-cebdce588a13\") " pod="openshift-network-node-identity/network-node-identity-7fl4x" Mar 18 10:10:42.272862 
master-0 kubenswrapper[30420]: I0318 10:10:42.272553 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8b906fc0-f2bf-4586-97e6-921bbd467b65-etcd-client\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 10:10:42.272862 master-0 kubenswrapper[30420]: I0318 10:10:42.272651 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-nq7mw\" (UID: \"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" Mar 18 10:10:42.272862 master-0 kubenswrapper[30420]: I0318 10:10:42.272705 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/432f611b-a1a2-4cc9-b005-17a16413d281-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-9nd2s\" (UID: \"432f611b-a1a2-4cc9-b005-17a16413d281\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s" Mar 18 10:10:42.272862 master-0 kubenswrapper[30420]: I0318 10:10:42.272800 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ee99294-4785-49d0-b493-0d734cf09396-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m" Mar 18 10:10:42.272862 master-0 kubenswrapper[30420]: I0318 10:10:42.272853 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: 
\"kubernetes.io/empty-dir/5900a401-21c2-47f0-a921-47c648da558d-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" Mar 18 10:10:42.273099 master-0 kubenswrapper[30420]: I0318 10:10:42.272880 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d72e695-0183-4ee8-8add-5425e67f7138-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-zz68c\" (UID: \"0d72e695-0183-4ee8-8add-5425e67f7138\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c" Mar 18 10:10:42.273099 master-0 kubenswrapper[30420]: I0318 10:10:42.272941 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmsjt\" (UniqueName: \"kubernetes.io/projected/1084562a-20a0-432d-b739-90bc0a4daff2-kube-api-access-qmsjt\") pod \"cluster-baremetal-operator-6f69995874-lnq7l\" (UID: \"1084562a-20a0-432d-b739-90bc0a4daff2\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l" Mar 18 10:10:42.273099 master-0 kubenswrapper[30420]: I0318 10:10:42.272976 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/43d54514-989c-4c82-93f9-153b44eacdd1-stats-auth\") pod \"router-default-7dcf5569b5-82tbk\" (UID: \"43d54514-989c-4c82-93f9-153b44eacdd1\") " pod="openshift-ingress/router-default-7dcf5569b5-82tbk" Mar 18 10:10:42.273099 master-0 kubenswrapper[30420]: I0318 10:10:42.273006 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43d54514-989c-4c82-93f9-153b44eacdd1-metrics-certs\") pod \"router-default-7dcf5569b5-82tbk\" (UID: \"43d54514-989c-4c82-93f9-153b44eacdd1\") " pod="openshift-ingress/router-default-7dcf5569b5-82tbk" Mar 18 10:10:42.273099 master-0 
kubenswrapper[30420]: I0318 10:10:42.273036 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-socket-dir-parent\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.273099 master-0 kubenswrapper[30420]: I0318 10:10:42.273064 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-daemon-config\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.273099 master-0 kubenswrapper[30420]: I0318 10:10:42.273063 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/5900a401-21c2-47f0-a921-47c648da558d-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" Mar 18 10:10:42.273335 master-0 kubenswrapper[30420]: I0318 10:10:42.273094 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvnrf\" (UniqueName: \"kubernetes.io/projected/62b82d72-d73c-451a-84e1-551d73036aa8-kube-api-access-lvnrf\") pod \"iptables-alerter-r7h65\" (UID: \"62b82d72-d73c-451a-84e1-551d73036aa8\") " pod="openshift-network-operator/iptables-alerter-r7h65" Mar 18 10:10:42.273335 master-0 kubenswrapper[30420]: I0318 10:10:42.273160 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-257hk\" (UniqueName: \"kubernetes.io/projected/29490aed-9c97-42d1-94c8-44d1de13b70c-kube-api-access-257hk\") pod \"cluster-storage-operator-7d87854d6-4kr54\" (UID: 
\"29490aed-9c97-42d1-94c8-44d1de13b70c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-4kr54" Mar 18 10:10:42.273335 master-0 kubenswrapper[30420]: I0318 10:10:42.273194 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da04c6fa-4916-4bed-a6b2-cc92bf2ee379-config-volume\") pod \"dns-default-z9sf5\" (UID: \"da04c6fa-4916-4bed-a6b2-cc92bf2ee379\") " pod="openshift-dns/dns-default-z9sf5" Mar 18 10:10:42.273335 master-0 kubenswrapper[30420]: I0318 10:10:42.273222 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xml27\" (UniqueName: \"kubernetes.io/projected/caec44dc-aab7-4407-b34a-52bbe4b4f635-kube-api-access-xml27\") pod \"cloud-credential-operator-744f9dbf77-rtnkl\" (UID: \"caec44dc-aab7-4407-b34a-52bbe4b4f635\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-rtnkl" Mar 18 10:10:42.273335 master-0 kubenswrapper[30420]: I0318 10:10:42.273302 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/91331360-dc70-45bb-a815-e00664bae6c4-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 10:10:42.273479 master-0 kubenswrapper[30420]: I0318 10:10:42.273306 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec53d7fa-445b-4e1d-84ef-545f08e80ccc-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-lk698\" (UID: \"ec53d7fa-445b-4e1d-84ef-545f08e80ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698" Mar 18 10:10:42.273479 master-0 kubenswrapper[30420]: I0318 10:10:42.273425 30420 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-daemon-config\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 10:10:42.273479 master-0 kubenswrapper[30420]: I0318 10:10:42.273452 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-run-systemd\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 10:10:42.273878 master-0 kubenswrapper[30420]: I0318 10:10:42.273591 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-cnibin\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw"
Mar 18 10:10:42.273878 master-0 kubenswrapper[30420]: I0318 10:10:42.273621 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1cb8ab19-0564-4182-a7e3-0943c1480663-sys\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t"
Mar 18 10:10:42.273878 master-0 kubenswrapper[30420]: I0318 10:10:42.273656 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec53d7fa-445b-4e1d-84ef-545f08e80ccc-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-lk698\" (UID: \"ec53d7fa-445b-4e1d-84ef-545f08e80ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698"
Mar 18 10:10:42.273878 master-0 kubenswrapper[30420]: I0318 10:10:42.273679 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/1084562a-20a0-432d-b739-90bc0a4daff2-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-lnq7l\" (UID: \"1084562a-20a0-432d-b739-90bc0a4daff2\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l"
Mar 18 10:10:42.273878 master-0 kubenswrapper[30420]: I0318 10:10:42.273704 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9c87410-8689-4884-b5a8-df3ecbb7f1a4-catalog-content\") pod \"certified-operators-pdfn6\" (UID: \"b9c87410-8689-4884-b5a8-df3ecbb7f1a4\") " pod="openshift-marketplace/certified-operators-pdfn6"
Mar 18 10:10:42.273878 master-0 kubenswrapper[30420]: I0318 10:10:42.273766 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-etc-openvswitch\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 10:10:42.273878 master-0 kubenswrapper[30420]: I0318 10:10:42.273791 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ee99294-4785-49d0-b493-0d734cf09396-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m"
Mar 18 10:10:42.273878 master-0 kubenswrapper[30420]: I0318 10:10:42.273799 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9c87410-8689-4884-b5a8-df3ecbb7f1a4-catalog-content\") pod \"certified-operators-pdfn6\" (UID: \"b9c87410-8689-4884-b5a8-df3ecbb7f1a4\") " pod="openshift-marketplace/certified-operators-pdfn6"
Mar 18 10:10:42.273878 master-0 kubenswrapper[30420]: I0318 10:10:42.273808 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bb942756-bac7-414d-b179-cebdce588a13-webhook-cert\") pod \"network-node-identity-7fl4x\" (UID: \"bb942756-bac7-414d-b179-cebdce588a13\") " pod="openshift-network-node-identity/network-node-identity-7fl4x"
Mar 18 10:10:42.274168 master-0 kubenswrapper[30420]: I0318 10:10:42.274049 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-trusted-ca-bundle\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf"
Mar 18 10:10:42.274168 master-0 kubenswrapper[30420]: I0318 10:10:42.274089 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/9cfd2323-c33a-4d80-9c25-710920c0e605-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-886k6\" (UID: \"9cfd2323-c33a-4d80-9c25-710920c0e605\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-886k6"
Mar 18 10:10:42.274168 master-0 kubenswrapper[30420]: I0318 10:10:42.274118 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m"
Mar 18 10:10:42.274168 master-0 kubenswrapper[30420]: I0318 10:10:42.274147 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-system-cni-dir\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.274212 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a078565a-6970-4f42-84f4-938f1d637245-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.274236 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4-config\") pod \"openshift-controller-manager-operator-8c94f4649-g25jq\" (UID: \"3646e0cd-49c9-4a98-a2e3-efe9359cc6c4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.274261 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d0605021-862d-424a-a4c1-037fb005b77e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-8txtx\" (UID: \"d0605021-862d-424a-a4c1-037fb005b77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.274315 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-sysctl-conf\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.274338 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3657106-1eea-4031-8c92-85ba6287b425-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"a3657106-1eea-4031-8c92-85ba6287b425\") " pod="openshift-kube-apiserver/installer-4-retry-1-master-0"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.274390 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/af1bbeee-1faf-43d1-943f-ee5319cef4e9-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-6rrn7\" (UID: \"af1bbeee-1faf-43d1-943f-ee5319cef4e9\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-6rrn7"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.274417 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-modprobe-d\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.274470 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43d54514-989c-4c82-93f9-153b44eacdd1-service-ca-bundle\") pod \"router-default-7dcf5569b5-82tbk\" (UID: \"43d54514-989c-4c82-93f9-153b44eacdd1\") " pod="openshift-ingress/router-default-7dcf5569b5-82tbk"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.274499 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-nq7mw\" (UID: \"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.274516 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a078565a-6970-4f42-84f4-938f1d637245-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.274549 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-nq7mw\" (UID: \"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.274797 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d0605021-862d-424a-a4c1-037fb005b77e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-8txtx\" (UID: \"d0605021-862d-424a-a4c1-037fb005b77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.274876 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bdf80ddc-7c99-4f60-814b-ba98809ef41d-tmpfs\") pod \"packageserver-7b64dcc66c-2vx58\" (UID: \"bdf80ddc-7c99-4f60-814b-ba98809ef41d\") " pod="openshift-operator-lifecycle-manager/packageserver-7b64dcc66c-2vx58"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.274913 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/af1bbeee-1faf-43d1-943f-ee5319cef4e9-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-6rrn7\" (UID: \"af1bbeee-1faf-43d1-943f-ee5319cef4e9\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-6rrn7"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.274948 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s54f9\" (UniqueName: \"kubernetes.io/projected/8e812dd9-cd05-4e9e-8710-d0920181ece2-kube-api-access-s54f9\") pod \"csi-snapshot-controller-operator-5f5d689c6b-mqbmq\" (UID: \"8e812dd9-cd05-4e9e-8710-d0920181ece2\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-mqbmq"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.274973 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-run-ovn-kubernetes\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.275067 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4-config\") pod \"openshift-controller-manager-operator-8c94f4649-g25jq\" (UID: \"3646e0cd-49c9-4a98-a2e3-efe9359cc6c4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.275126 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bdf80ddc-7c99-4f60-814b-ba98809ef41d-tmpfs\") pod \"packageserver-7b64dcc66c-2vx58\" (UID: \"bdf80ddc-7c99-4f60-814b-ba98809ef41d\") " pod="openshift-operator-lifecycle-manager/packageserver-7b64dcc66c-2vx58"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.275141 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ee99294-4785-49d0-b493-0d734cf09396-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.276029 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls\") pod \"dns-operator-9c5679d8f-jrmkr\" (UID: \"8cb5158f-2199-42c0-995a-8490c9ec8a95\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-jrmkr"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.276066 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d89r9\" (UniqueName: \"kubernetes.io/projected/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-kube-api-access-d89r9\") pod \"cluster-cloud-controller-manager-operator-7dff898856-rpxn4\" (UID: \"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.276100 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1ad4aa30-f7d5-47ca-b01e-2643f7195685-machine-approver-tls\") pod \"machine-approver-5c6485487f-95jvh\" (UID: \"1ad4aa30-f7d5-47ca-b01e-2643f7195685\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-95jvh"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.276124 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-telemeter-trusted-ca-bundle\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.276147 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4hfd\" (UniqueName: \"kubernetes.io/projected/c2635254-a491-42e5-b598-461c24bf77ca-kube-api-access-p4hfd\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.276162 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.276172 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-config\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.276437 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8cb5158f-2199-42c0-995a-8490c9ec8a95-metrics-tls\") pod \"dns-operator-9c5679d8f-jrmkr\" (UID: \"8cb5158f-2199-42c0-995a-8490c9ec8a95\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-jrmkr"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.276481 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71755097-7543-48f8-8925-0e21650bf8f6-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-bdcw7\" (UID: \"71755097-7543-48f8-8925-0e21650bf8f6\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bdcw7"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.276519 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv8x5\" (UniqueName: \"kubernetes.io/projected/932a70df-3afe-4873-9449-ab6e061d3fe3-kube-api-access-fv8x5\") pod \"csi-snapshot-controller-64854d9cff-2l6cq\" (UID: \"932a70df-3afe-4873-9449-ab6e061d3fe3\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-2l6cq"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.276575 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hww8g\" (UniqueName: \"kubernetes.io/projected/8126b78e-d1e4-4de7-a71d-ebc9fa0afdae-kube-api-access-hww8g\") pod \"migrator-8487694857-8tqwj\" (UID: \"8126b78e-d1e4-4de7-a71d-ebc9fa0afdae\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-8tqwj"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.276605 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0999f781-3299-4cb6-ba76-2a4f4584c685-config\") pod \"kube-controller-manager-operator-ff989d6cc-pzqqc\" (UID: \"0999f781-3299-4cb6-ba76-2a4f4584c685\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.276659 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlxfz\" (UniqueName: \"kubernetes.io/projected/bb35841e-d992-4044-aaaa-06c9faf47bd0-kube-api-access-zlxfz\") pod \"service-ca-operator-b865698dc-pgtbr\" (UID: \"bb35841e-d992-4044-aaaa-06c9faf47bd0\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.276689 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0945a421-d7c4-46df-b3d9-507443627d51-catalog-content\") pod \"redhat-operators-jl7c8\" (UID: \"0945a421-d7c4-46df-b3d9-507443627d51\") " pod="openshift-marketplace/redhat-operators-jl7c8"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.276743 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f076eaf0-b041-4db0-ba06-3d85e23bb654-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-4q9tr\" (UID: \"f076eaf0-b041-4db0-ba06-3d85e23bb654\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.276773 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-lib-modules\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.276850 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-549bq\" (UniqueName: \"kubernetes.io/projected/0c7b317c-d141-4e69-9c82-4a5dda6c3248-kube-api-access-549bq\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.276884 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71755097-7543-48f8-8925-0e21650bf8f6-serving-cert\") pod \"insights-operator-68bf6ff9d6-bdcw7\" (UID: \"71755097-7543-48f8-8925-0e21650bf8f6\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bdcw7"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.276935 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0999f781-3299-4cb6-ba76-2a4f4584c685-config\") pod \"kube-controller-manager-operator-ff989d6cc-pzqqc\" (UID: \"0999f781-3299-4cb6-ba76-2a4f4584c685\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.276943 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-rpxn4\" (UID: \"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.277056 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0945a421-d7c4-46df-b3d9-507443627d51-catalog-content\") pod \"redhat-operators-jl7c8\" (UID: \"0945a421-d7c4-46df-b3d9-507443627d51\") " pod="openshift-marketplace/redhat-operators-jl7c8"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.277181 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d0605021-862d-424a-a4c1-037fb005b77e-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-8txtx\" (UID: \"d0605021-862d-424a-a4c1-037fb005b77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.277223 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6bvr\" (UniqueName: \"kubernetes.io/projected/0d72e695-0183-4ee8-8add-5425e67f7138-kube-api-access-g6bvr\") pod \"openshift-apiserver-operator-d65958b8-zz68c\" (UID: \"0d72e695-0183-4ee8-8add-5425e67f7138\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.277283 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/432f611b-a1a2-4cc9-b005-17a16413d281-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-9nd2s\" (UID: \"432f611b-a1a2-4cc9-b005-17a16413d281\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.277342 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj9sq\" (UniqueName: \"kubernetes.io/projected/ec53d7fa-445b-4e1d-84ef-545f08e80ccc-kube-api-access-wj9sq\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-lk698\" (UID: \"ec53d7fa-445b-4e1d-84ef-545f08e80ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.277363 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f076eaf0-b041-4db0-ba06-3d85e23bb654-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-4q9tr\" (UID: \"f076eaf0-b041-4db0-ba06-3d85e23bb654\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.277374 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-cni-netd\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.277426 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b0f77d68-f228-4f82-befb-fb2a2ce2e976-tmp\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.277459 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-cnibin\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.277516 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9c87410-8689-4884-b5a8-df3ecbb7f1a4-utilities\") pod \"certified-operators-pdfn6\" (UID: \"b9c87410-8689-4884-b5a8-df3ecbb7f1a4\") " pod="openshift-marketplace/certified-operators-pdfn6"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.277580 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-ovn-node-metrics-cert\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.277626 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/71755097-7543-48f8-8925-0e21650bf8f6-snapshots\") pod \"insights-operator-68bf6ff9d6-bdcw7\" (UID: \"71755097-7543-48f8-8925-0e21650bf8f6\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bdcw7"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.277681 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d0605021-862d-424a-a4c1-037fb005b77e-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-8txtx\" (UID: \"d0605021-862d-424a-a4c1-037fb005b77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.277686 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-proxy-ca-bundles\") pod \"controller-manager-6c87d45bb4-vxcx9\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") " pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.277694 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9c87410-8689-4884-b5a8-df3ecbb7f1a4-utilities\") pod \"certified-operators-pdfn6\" (UID: \"b9c87410-8689-4884-b5a8-df3ecbb7f1a4\") " pod="openshift-marketplace/certified-operators-pdfn6"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.277769 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f076eaf0-b041-4db0-ba06-3d85e23bb654-serving-cert\") pod \"authentication-operator-5885bfd7f4-4q9tr\" (UID: \"f076eaf0-b041-4db0-ba06-3d85e23bb654\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.277779 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b0f77d68-f228-4f82-befb-fb2a2ce2e976-tmp\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.277790 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/71755097-7543-48f8-8925-0e21650bf8f6-snapshots\") pod \"insights-operator-68bf6ff9d6-bdcw7\" (UID: \"71755097-7543-48f8-8925-0e21650bf8f6\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bdcw7"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.277879 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8b906fc0-f2bf-4586-97e6-921bbd467b65-audit-dir\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.277912 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-serving-cert\") pod \"route-controller-manager-5657df7dd8-4pp68\" (UID: \"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d\") " pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.277938 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9f5c64aa-676e-4e48-b714-02f6edb1d361-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-mw9tt\" (UID: \"9f5c64aa-676e-4e48-b714-02f6edb1d361\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-mw9tt"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.278024 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f076eaf0-b041-4db0-ba06-3d85e23bb654-serving-cert\") pod \"authentication-operator-5885bfd7f4-4q9tr\" (UID: \"f076eaf0-b041-4db0-ba06-3d85e23bb654\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.278053 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1084562a-20a0-432d-b739-90bc0a4daff2-cert\") pod \"cluster-baremetal-operator-6f69995874-lnq7l\" (UID: \"1084562a-20a0-432d-b739-90bc0a4daff2\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.278092 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxv6v\" (UniqueName: \"kubernetes.io/projected/a078565a-6970-4f42-84f4-938f1d637245-kube-api-access-cxv6v\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.278118 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-run\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.278143 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/5ea90fee-5b5e-4b59-bfc4-969ee8c7912e-signing-key\") pod \"service-ca-79bc6b8d76-jjcsv\" (UID: \"5ea90fee-5b5e-4b59-bfc4-969ee8c7912e\") " pod="openshift-service-ca/service-ca-79bc6b8d76-jjcsv"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.278183 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-telemeter-client-tls\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.278208 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-federate-client-tls\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.278232 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8ee99294-4785-49d0-b493-0d734cf09396-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.278404 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e5e0836f-c0b4-40cd-9f63-55774da2740e-mcd-auth-proxy-config\") pod \"machine-config-daemon-mtdk2\" (UID: \"e5e0836f-c0b4-40cd-9f63-55774da2740e\") " pod="openshift-machine-config-operator/machine-config-daemon-mtdk2"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.278443 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/29fbc78b-1887-40d4-8165-f0f7cc40b583-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-xnvn9\" (UID: \"29fbc78b-1887-40d4-8165-f0f7cc40b583\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-xnvn9"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.278467 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-g25jq\" (UID: \"3646e0cd-49c9-4a98-a2e3-efe9359cc6c4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.278411 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/5ea90fee-5b5e-4b59-bfc4-969ee8c7912e-signing-key\") pod \"service-ca-79bc6b8d76-jjcsv\" (UID: \"5ea90fee-5b5e-4b59-bfc4-969ee8c7912e\") " pod="openshift-service-ca/service-ca-79bc6b8d76-jjcsv"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.278490 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8b906fc0-f2bf-4586-97e6-921bbd467b65-etcd-serving-ca\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.278560 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.278581 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a078565a-6970-4f42-84f4-938f1d637245-etcd-client\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.278599 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/62b82d72-d73c-451a-84e1-551d73036aa8-iptables-alerter-script\") pod \"iptables-alerter-r7h65\" (UID: \"62b82d72-d73c-451a-84e1-551d73036aa8\") " pod="openshift-network-operator/iptables-alerter-r7h65"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.278629 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.278650 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6-config\") pod \"openshift-kube-scheduler-operator-dddff6458-vj8tt\" (UID: \"3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt"
Mar 18 10:10:42.278562 master-0 kubenswrapper[30420]: I0318 10:10:42.278668 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a6a616d-012a-479e-ab3d-b21295ea1805-config\") pod \"kube-apiserver-operator-8b68b9d9b-smghb\" (UID: \"6a6a616d-012a-479e-ab3d-b21295ea1805\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb"
Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.278684 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-ovnkube-config\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.278704 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw"
Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.278722 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName:
\"kubernetes.io/secret/bdf80ddc-7c99-4f60-814b-ba98809ef41d-webhook-cert\") pod \"packageserver-7b64dcc66c-2vx58\" (UID: \"bdf80ddc-7c99-4f60-814b-ba98809ef41d\") " pod="openshift-operator-lifecycle-manager/packageserver-7b64dcc66c-2vx58" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.278741 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0945a421-d7c4-46df-b3d9-507443627d51-utilities\") pod \"redhat-operators-jl7c8\" (UID: \"0945a421-d7c4-46df-b3d9-507443627d51\") " pod="openshift-marketplace/redhat-operators-jl7c8" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.278758 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c7b317c-d141-4e69-9c82-4a5dda6c3248-audit-dir\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.278768 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8b906fc0-f2bf-4586-97e6-921bbd467b65-etcd-serving-ca\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.278775 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-systemd\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.278987 30420 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/accc57fb-75f5-4f89-9804-6ede7f77e27c-metrics-tls\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.279016 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8b906fc0-f2bf-4586-97e6-921bbd467b65-audit-policies\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.279034 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db376fea-5756-4bc2-9685-f32730b5a6f7-catalog-content\") pod \"community-operators-nzqck\" (UID: \"db376fea-5756-4bc2-9685-f32730b5a6f7\") " pod="openshift-marketplace/community-operators-nzqck" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.279054 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v8jq\" (UniqueName: \"kubernetes.io/projected/1cb8ab19-0564-4182-a7e3-0943c1480663-kube-api-access-4v8jq\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.279077 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-run-openvswitch\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 
10:10:42.279173 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0945a421-d7c4-46df-b3d9-507443627d51-utilities\") pod \"redhat-operators-jl7c8\" (UID: \"0945a421-d7c4-46df-b3d9-507443627d51\") " pod="openshift-marketplace/redhat-operators-jl7c8" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.279221 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-g25jq\" (UID: \"3646e0cd-49c9-4a98-a2e3-efe9359cc6c4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.279339 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8b906fc0-f2bf-4586-97e6-921bbd467b65-audit-policies\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.279373 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db376fea-5756-4bc2-9685-f32730b5a6f7-catalog-content\") pod \"community-operators-nzqck\" (UID: \"db376fea-5756-4bc2-9685-f32730b5a6f7\") " pod="openshift-marketplace/community-operators-nzqck" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.279598 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6-config\") pod \"openshift-kube-scheduler-operator-dddff6458-vj8tt\" (UID: \"3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.279723 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a6a616d-012a-479e-ab3d-b21295ea1805-config\") pod \"kube-apiserver-operator-8b68b9d9b-smghb\" (UID: \"6a6a616d-012a-479e-ab3d-b21295ea1805\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.279777 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-ovnkube-config\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.279869 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-node-log\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.279903 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-kubernetes\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.280071 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/da04c6fa-4916-4bed-a6b2-cc92bf2ee379-metrics-tls\") pod 
\"dns-default-z9sf5\" (UID: \"da04c6fa-4916-4bed-a6b2-cc92bf2ee379\") " pod="openshift-dns/dns-default-z9sf5" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.280069 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a078565a-6970-4f42-84f4-938f1d637245-etcd-client\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.280128 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9ccdc221-4ec5-487e-8ec4-85284ed628d8-host-etc-kube\") pod \"network-operator-7bd846bfc4-8srnz\" (UID: \"9ccdc221-4ec5-487e-8ec4-85284ed628d8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-8srnz" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.280174 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb35841e-d992-4044-aaaa-06c9faf47bd0-serving-cert\") pod \"service-ca-operator-b865698dc-pgtbr\" (UID: \"bb35841e-d992-4044-aaaa-06c9faf47bd0\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.280203 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/91331360-dc70-45bb-a815-e00664bae6c4-cni-binary-copy\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.280232 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" 
(UniqueName: \"kubernetes.io/configmap/5900a401-21c2-47f0-a921-47c648da558d-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.280269 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-env-overrides\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.280296 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn7zt\" (UniqueName: \"kubernetes.io/projected/f875878f-3588-42f1-9488-750d9f4582f8-kube-api-access-nn7zt\") pod \"multus-admission-controller-58c9f8fc64-ssnvh\" (UID: \"f875878f-3588-42f1-9488-750d9f4582f8\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-ssnvh" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.280321 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-tuned\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.280345 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/5ea90fee-5b5e-4b59-bfc4-969ee8c7912e-signing-cabundle\") pod \"service-ca-79bc6b8d76-jjcsv\" (UID: \"5ea90fee-5b5e-4b59-bfc4-969ee8c7912e\") " pod="openshift-service-ca/service-ca-79bc6b8d76-jjcsv" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.280371 30420 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/caec44dc-aab7-4407-b34a-52bbe4b4f635-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-rtnkl\" (UID: \"caec44dc-aab7-4407-b34a-52bbe4b4f635\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-rtnkl" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.280398 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec53d7fa-445b-4e1d-84ef-545f08e80ccc-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-lk698\" (UID: \"ec53d7fa-445b-4e1d-84ef-545f08e80ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.280421 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-tls\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.280445 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71755097-7543-48f8-8925-0e21650bf8f6-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-bdcw7\" (UID: \"71755097-7543-48f8-8925-0e21650bf8f6\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bdcw7" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.280471 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: 
\"kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-secret-metrics-server-tls\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.280496 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-conf-dir\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.280523 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7-catalog-content\") pod \"redhat-marketplace-8w5rc\" (UID: \"1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7\") " pod="openshift-marketplace/redhat-marketplace-8w5rc" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.280550 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkvcs\" (UniqueName: \"kubernetes.io/projected/af1bbeee-1faf-43d1-943f-ee5319cef4e9-kube-api-access-nkvcs\") pod \"openshift-state-metrics-5dc6c74576-6rrn7\" (UID: \"af1bbeee-1faf-43d1-943f-ee5319cef4e9\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-6rrn7" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.280577 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9cfd2323-c33a-4d80-9c25-710920c0e605-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-886k6\" (UID: \"9cfd2323-c33a-4d80-9c25-710920c0e605\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-886k6" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 
10:10:42.280602 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqx6m\" (UniqueName: \"kubernetes.io/projected/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-kube-api-access-fqx6m\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.280625 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rzsk\" (UniqueName: \"kubernetes.io/projected/74795f5d-dcd7-4723-8931-c34b59ce3087-kube-api-access-8rzsk\") pod \"network-check-target-42l55\" (UID: \"74795f5d-dcd7-4723-8931-c34b59ce3087\") " pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.280648 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-etc-kubernetes\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.280673 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/582d2ba8-1210-47d0-a530-0b20b2fdde22-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-4wcqx\" (UID: \"582d2ba8-1210-47d0-a530-0b20b2fdde22\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4wcqx" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.280697 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f076eaf0-b041-4db0-ba06-3d85e23bb654-config\") pod \"authentication-operator-5885bfd7f4-4q9tr\" (UID: 
\"f076eaf0-b041-4db0-ba06-3d85e23bb654\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.280720 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-2glpv\" (UID: \"6f266bad-8b30-4300-ad93-9d48e61f2440\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.280746 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-rpxn4\" (UID: \"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.280773 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d0605021-862d-424a-a4c1-037fb005b77e-env-overrides\") pod \"ovnkube-control-plane-57f769d897-8txtx\" (UID: \"d0605021-862d-424a-a4c1-037fb005b77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.280796 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-host\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.280839 30420 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6a6a616d-012a-479e-ab3d-b21295ea1805-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-smghb\" (UID: \"6a6a616d-012a-479e-ab3d-b21295ea1805\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.280866 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw4s4\" (UniqueName: \"kubernetes.io/projected/8b906fc0-f2bf-4586-97e6-921bbd467b65-kube-api-access-rw4s4\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.280889 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-run-k8s-cni-cncf-io\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.280914 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcj8f\" (UniqueName: \"kubernetes.io/projected/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-kube-api-access-hcj8f\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.280938 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ce65f61f-8e3a-47d5-ac12-ad4ab05d2850-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-5dg2r\" (UID: \"ce65f61f-8e3a-47d5-ac12-ad4ab05d2850\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-5dg2r" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.280961 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-vj8tt\" (UID: \"3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.280987 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-secret-metrics-client-certs\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.281013 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-r8fkv\" (UID: \"d4d2218c-f9df-4d43-8727-ed3a920e23f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.281040 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzzjs\" (UniqueName: \"kubernetes.io/projected/1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7-kube-api-access-wzzjs\") pod \"redhat-marketplace-8w5rc\" (UID: \"1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7\") " pod="openshift-marketplace/redhat-marketplace-8w5rc" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.281229 30420 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmffc\" (UniqueName: \"kubernetes.io/projected/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-kube-api-access-gmffc\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.281260 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-vj8tt\" (UID: \"3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.281287 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/29490aed-9c97-42d1-94c8-44d1de13b70c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-4kr54\" (UID: \"29490aed-9c97-42d1-94c8-44d1de13b70c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-4kr54" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.281300 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb35841e-d992-4044-aaaa-06c9faf47bd0-serving-cert\") pod \"service-ca-operator-b865698dc-pgtbr\" (UID: \"bb35841e-d992-4044-aaaa-06c9faf47bd0\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.281312 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480-serving-cert\") pod 
\"openshift-config-operator-95bf4f4d-495pg\" (UID: \"0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.281352 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-var-lib-cni-bin\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.281380 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/accc57fb-75f5-4f89-9804-6ede7f77e27c-bound-sa-token\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.281402 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25k9g\" (UniqueName: \"kubernetes.io/projected/ee376320-9ca0-444d-ab37-9cbcb6729b11-kube-api-access-25k9g\") pod \"catalog-operator-68f85b4d6c-fhz5s\" (UID: \"ee376320-9ca0-444d-ab37-9cbcb6729b11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.281576 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-env-overrides\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.281592 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/f076eaf0-b041-4db0-ba06-3d85e23bb654-config\") pod \"authentication-operator-5885bfd7f4-4q9tr\" (UID: \"f076eaf0-b041-4db0-ba06-3d85e23bb654\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.281693 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-tuned\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.281870 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/5ea90fee-5b5e-4b59-bfc4-969ee8c7912e-signing-cabundle\") pod \"service-ca-79bc6b8d76-jjcsv\" (UID: \"5ea90fee-5b5e-4b59-bfc4-969ee8c7912e\") " pod="openshift-service-ca/service-ca-79bc6b8d76-jjcsv" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.281894 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/91331360-dc70-45bb-a815-e00664bae6c4-cni-binary-copy\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.282035 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7-catalog-content\") pod \"redhat-marketplace-8w5rc\" (UID: \"1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7\") " pod="openshift-marketplace/redhat-marketplace-8w5rc" Mar 18 10:10:42.282227 master-0 kubenswrapper[30420]: I0318 10:10:42.282112 30420 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d0605021-862d-424a-a4c1-037fb005b77e-env-overrides\") pod \"ovnkube-control-plane-57f769d897-8txtx\" (UID: \"d0605021-862d-424a-a4c1-037fb005b77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.282334 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480-serving-cert\") pod \"openshift-config-operator-95bf4f4d-495pg\" (UID: \"0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.282403 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.282646 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-2glpv\" (UID: \"6f266bad-8b30-4300-ad93-9d48e61f2440\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.282681 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec53d7fa-445b-4e1d-84ef-545f08e80ccc-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-lk698\" (UID: 
\"ec53d7fa-445b-4e1d-84ef-545f08e80ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.282845 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/1cb8ab19-0564-4182-a7e3-0943c1480663-root\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.282861 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-vj8tt\" (UID: \"3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.282921 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-config\") pod \"controller-manager-6c87d45bb4-vxcx9\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") " pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.283031 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/43d54514-989c-4c82-93f9-153b44eacdd1-default-certificate\") pod \"router-default-7dcf5569b5-82tbk\" (UID: \"43d54514-989c-4c82-93f9-153b44eacdd1\") " pod="openshift-ingress/router-default-7dcf5569b5-82tbk" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.283042 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d2218c-f9df-4d43-8727-ed3a920e23f7-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-r8fkv\" (UID: \"d4d2218c-f9df-4d43-8727-ed3a920e23f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.283084 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e5e0836f-c0b4-40cd-9f63-55774da2740e-rootfs\") pod \"machine-config-daemon-mtdk2\" (UID: \"e5e0836f-c0b4-40cd-9f63-55774da2740e\") " pod="openshift-machine-config-operator/machine-config-daemon-mtdk2" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.283120 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2g9q\" (UniqueName: \"kubernetes.io/projected/b6948f93-b573-4f09-b754-aaa2269e2875-kube-api-access-t2g9q\") pod \"operator-controller-controller-manager-57777556ff-77n8q\" (UID: \"b6948f93-b573-4f09-b754-aaa2269e2875\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.283146 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a078565a-6970-4f42-84f4-938f1d637245-etcd-ca\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.283166 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvhfc\" (UniqueName: \"kubernetes.io/projected/71755097-7543-48f8-8925-0e21650bf8f6-kube-api-access-qvhfc\") pod \"insights-operator-68bf6ff9d6-bdcw7\" (UID: 
\"71755097-7543-48f8-8925-0e21650bf8f6\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bdcw7" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.283228 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxj5c\" (UniqueName: \"kubernetes.io/projected/d0605021-862d-424a-a4c1-037fb005b77e-kube-api-access-cxj5c\") pod \"ovnkube-control-plane-57f769d897-8txtx\" (UID: \"d0605021-862d-424a-a4c1-037fb005b77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.283254 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-nq7mw\" (UID: \"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.283288 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-os-release\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.283309 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f88c2a18-11f5-45ef-aff1-3c5976716d85-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-zcm5j\" (UID: \"f88c2a18-11f5-45ef-aff1-3c5976716d85\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zcm5j" Mar 18 10:10:42.284308 master-0 
kubenswrapper[30420]: I0318 10:10:42.283330 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/accc57fb-75f5-4f89-9804-6ede7f77e27c-trusted-ca\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.283351 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9f5c64aa-676e-4e48-b714-02f6edb1d361-cert\") pod \"cluster-autoscaler-operator-866dc4744-mw9tt\" (UID: \"9f5c64aa-676e-4e48-b714-02f6edb1d361\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-mw9tt" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.283348 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a078565a-6970-4f42-84f4-938f1d637245-etcd-ca\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.283401 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-audit-log\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.283426 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/91331360-dc70-45bb-a815-e00664bae6c4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " 
pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.283466 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-run-ovn\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.283481 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-audit-log\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.283501 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/af1bbeee-1faf-43d1-943f-ee5319cef4e9-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-6rrn7\" (UID: \"af1bbeee-1faf-43d1-943f-ee5319cef4e9\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-6rrn7" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.283558 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/accc57fb-75f5-4f89-9804-6ede7f77e27c-trusted-ca\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.283578 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/91331360-dc70-45bb-a815-e00664bae6c4-cni-sysctl-allowlist\") pod 
\"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.283604 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-sysconfig\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.283632 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-var-lib-kubelet\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.283659 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/74476be5-669a-4737-b93b-c4870423a4da-cert\") pod \"ingress-canary-rzksb\" (UID: \"74476be5-669a-4737-b93b-c4870423a4da\") " pod="openshift-ingress-canary/ingress-canary-rzksb" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.283683 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-cni-binary-copy\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.283709 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7-utilities\") pod \"redhat-marketplace-8w5rc\" (UID: \"1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7\") " pod="openshift-marketplace/redhat-marketplace-8w5rc" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.283735 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-cache\") pod \"catalogd-controller-manager-6864dc98f7-nq7mw\" (UID: \"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.283765 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xttqt\" (UniqueName: \"kubernetes.io/projected/9f5c64aa-676e-4e48-b714-02f6edb1d361-kube-api-access-xttqt\") pod \"cluster-autoscaler-operator-866dc4744-mw9tt\" (UID: \"9f5c64aa-676e-4e48-b714-02f6edb1d361\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-mw9tt" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.283784 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7-utilities\") pod \"redhat-marketplace-8w5rc\" (UID: \"1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7\") " pod="openshift-marketplace/redhat-marketplace-8w5rc" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.283790 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-cni-bin\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.283846 30420 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blfkg\" (UniqueName: \"kubernetes.io/projected/9cfd2323-c33a-4d80-9c25-710920c0e605-kube-api-access-blfkg\") pod \"prometheus-operator-6c8df6d4b-886k6\" (UID: \"9cfd2323-c33a-4d80-9c25-710920c0e605\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-886k6" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.283855 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-cache\") pod \"catalogd-controller-manager-6864dc98f7-nq7mw\" (UID: \"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.283881 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp8vt\" (UniqueName: \"kubernetes.io/projected/1ad4aa30-f7d5-47ca-b01e-2643f7195685-kube-api-access-fp8vt\") pod \"machine-approver-5c6485487f-95jvh\" (UID: \"1ad4aa30-f7d5-47ca-b01e-2643f7195685\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-95jvh" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.283911 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb35841e-d992-4044-aaaa-06c9faf47bd0-config\") pod \"service-ca-operator-b865698dc-pgtbr\" (UID: \"bb35841e-d992-4044-aaaa-06c9faf47bd0\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.283936 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: 
\"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.283970 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-textfile\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.283981 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-cni-binary-copy\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.284000 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a3657106-1eea-4031-8c92-85ba6287b425-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"a3657106-1eea-4031-8c92-85ba6287b425\") " pod="openshift-kube-apiserver/installer-4-retry-1-master-0" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.284040 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs\") pod \"network-metrics-daemon-tbxt4\" (UID: \"0442ec6c-5973-40a5-a0c3-dc02de46d343\") " pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.284073 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-ovnkube-script-lib\") 
pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.284091 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-textfile\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.284113 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb35841e-d992-4044-aaaa-06c9faf47bd0-config\") pod \"service-ca-operator-b865698dc-pgtbr\" (UID: \"bb35841e-d992-4044-aaaa-06c9faf47bd0\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.284176 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmxj9\" (UniqueName: \"kubernetes.io/projected/2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0-kube-api-access-gmxj9\") pod \"machine-config-controller-b4f87c5b9-rslmx\" (UID: \"2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rslmx" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.284204 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-var-lib-kubelet\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.284228 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/432f611b-a1a2-4cc9-b005-17a16413d281-kube-api-access\") pod \"cluster-version-operator-7d58488df-9nd2s\" (UID: \"432f611b-a1a2-4cc9-b005-17a16413d281\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.284276 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.284289 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-ovnkube-script-lib\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.284300 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.284315 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0442ec6c-5973-40a5-a0c3-dc02de46d343-metrics-certs\") pod \"network-metrics-daemon-tbxt4\" (UID: \"0442ec6c-5973-40a5-a0c3-dc02de46d343\") " 
pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.284327 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.284308 master-0 kubenswrapper[30420]: I0318 10:10:42.284357 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/caec44dc-aab7-4407-b34a-52bbe4b4f635-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-rtnkl\" (UID: \"caec44dc-aab7-4407-b34a-52bbe4b4f635\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-rtnkl" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.284377 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1cb8ab19-0564-4182-a7e3-0943c1480663-metrics-client-ca\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.284396 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-run-netns\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.284454 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-etcd-serving-ca\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.284481 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.284490 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c7b317c-d141-4e69-9c82-4a5dda6c3248-serving-cert\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.284519 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-rslmx\" (UID: \"2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rslmx" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.284556 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-495pg\" (UID: \"0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" Mar 18 
10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.284575 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/c2635254-a491-42e5-b598-461c24bf77ca-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.284587 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ad4aa30-f7d5-47ca-b01e-2643f7195685-config\") pod \"machine-approver-5c6485487f-95jvh\" (UID: \"1ad4aa30-f7d5-47ca-b01e-2643f7195685\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-95jvh" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.284627 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-secret-telemeter-client\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.284653 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-495pg\" (UID: \"0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.284658 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxf74\" (UniqueName: 
\"kubernetes.io/projected/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-kube-api-access-sxf74\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.284704 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/b6948f93-b573-4f09-b754-aaa2269e2875-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-77n8q\" (UID: \"b6948f93-b573-4f09-b754-aaa2269e2875\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.284724 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-system-cni-dir\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.284741 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-hostroot\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.284762 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2d014721-ed53-447a-b737-c496bbba18be-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-gnl5t\" (UID: \"2d014721-ed53-447a-b737-c496bbba18be\") " 
pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.284780 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-systemd-units\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.284799 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/d26036f1-bdce-4ec5-873f-962fa7e8e6c1-operand-assets\") pod \"cluster-olm-operator-67dcd4998-mrc8q\" (UID: \"d26036f1-bdce-4ec5-873f-962fa7e8e6c1\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.284816 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-2glpv\" (UID: \"6f266bad-8b30-4300-ad93-9d48e61f2440\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.284865 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2d014721-ed53-447a-b737-c496bbba18be-images\") pod \"machine-config-operator-84d549f6d5-gnl5t\" (UID: \"2d014721-ed53-447a-b737-c496bbba18be\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.284880 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: 
\"kubernetes.io/empty-dir/d26036f1-bdce-4ec5-873f-962fa7e8e6c1-operand-assets\") pod \"cluster-olm-operator-67dcd4998-mrc8q\" (UID: \"d26036f1-bdce-4ec5-873f-962fa7e8e6c1\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.284891 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4btrk\" (UniqueName: \"kubernetes.io/projected/2d014721-ed53-447a-b737-c496bbba18be-kube-api-access-4btrk\") pod \"machine-config-operator-84d549f6d5-gnl5t\" (UID: \"2d014721-ed53-447a-b737-c496bbba18be\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.284918 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-var-lib-openvswitch\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.284954 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x46bf\" (UniqueName: \"kubernetes.io/projected/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-kube-api-access-x46bf\") pod \"controller-manager-6c87d45bb4-vxcx9\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") " pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.284984 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-rpxn4\" (UID: \"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.285015 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.285044 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/62b82d72-d73c-451a-84e1-551d73036aa8-host-slash\") pod \"iptables-alerter-r7h65\" (UID: \"62b82d72-d73c-451a-84e1-551d73036aa8\") " pod="openshift-network-operator/iptables-alerter-r7h65" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.285162 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f266bad-8b30-4300-ad93-9d48e61f2440-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-2glpv\" (UID: \"6f266bad-8b30-4300-ad93-9d48e61f2440\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.285224 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm2nt\" (UniqueName: \"kubernetes.io/projected/29fbc78b-1887-40d4-8165-f0f7cc40b583-kube-api-access-vm2nt\") pod \"machine-api-operator-6fbb6cf6f9-xnvn9\" (UID: \"29fbc78b-1887-40d4-8165-f0f7cc40b583\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-xnvn9" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.285250 30420 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-rslmx\" (UID: \"2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rslmx" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.285326 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-serving-certs-ca-bundle\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.285356 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-metrics-client-ca\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.285379 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-sysctl-d\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.285394 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-sys\") pod \"tuned-6rhgt\" (UID: 
\"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.285451 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/432f611b-a1a2-4cc9-b005-17a16413d281-serving-cert\") pod \"cluster-version-operator-7d58488df-9nd2s\" (UID: \"432f611b-a1a2-4cc9-b005-17a16413d281\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.285477 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/432f611b-a1a2-4cc9-b005-17a16413d281-service-ca\") pod \"cluster-version-operator-7d58488df-9nd2s\" (UID: \"432f611b-a1a2-4cc9-b005-17a16413d281\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.285496 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3657106-1eea-4031-8c92-85ba6287b425-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"a3657106-1eea-4031-8c92-85ba6287b425\") " pod="openshift-kube-apiserver/installer-4-retry-1-master-0" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.285517 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b46jq\" (UniqueName: \"kubernetes.io/projected/5ea90fee-5b5e-4b59-bfc4-969ee8c7912e-kube-api-access-b46jq\") pod \"service-ca-79bc6b8d76-jjcsv\" (UID: \"5ea90fee-5b5e-4b59-bfc4-969ee8c7912e\") " pod="openshift-service-ca/service-ca-79bc6b8d76-jjcsv" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.285539 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-k29kr\" (UniqueName: \"kubernetes.io/projected/0945a421-d7c4-46df-b3d9-507443627d51-kube-api-access-k29kr\") pod \"redhat-operators-jl7c8\" (UID: \"0945a421-d7c4-46df-b3d9-507443627d51\") " pod="openshift-marketplace/redhat-operators-jl7c8" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.285564 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4qbs\" (UniqueName: \"kubernetes.io/projected/aaadd000-4db7-4264-bfc1-b0ad63c8fb05-kube-api-access-v4qbs\") pod \"network-check-source-b4bf74f6-4kpnv\" (UID: \"aaadd000-4db7-4264-bfc1-b0ad63c8fb05\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-4kpnv" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.285585 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w8sl\" (UniqueName: \"kubernetes.io/projected/91331360-dc70-45bb-a815-e00664bae6c4-kube-api-access-8w8sl\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.285665 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f25pg\" (UniqueName: \"kubernetes.io/projected/f076eaf0-b041-4db0-ba06-3d85e23bb654-kube-api-access-f25pg\") pod \"authentication-operator-5885bfd7f4-4q9tr\" (UID: \"f076eaf0-b041-4db0-ba06-3d85e23bb654\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr" Mar 18 10:10:42.286520 master-0 kubenswrapper[30420]: I0318 10:10:42.285717 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bdf80ddc-7c99-4f60-814b-ba98809ef41d-apiservice-cert\") pod \"packageserver-7b64dcc66c-2vx58\" (UID: \"bdf80ddc-7c99-4f60-814b-ba98809ef41d\") " 
pod="openshift-operator-lifecycle-manager/packageserver-7b64dcc66c-2vx58" Mar 18 10:10:42.297130 master-0 kubenswrapper[30420]: I0318 10:10:42.296817 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 18 10:10:42.317374 master-0 kubenswrapper[30420]: I0318 10:10:42.317306 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 18 10:10:42.336593 master-0 kubenswrapper[30420]: I0318 10:10:42.336294 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 18 10:10:42.338788 master-0 kubenswrapper[30420]: I0318 10:10:42.338746 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-ovn-node-metrics-cert\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.356150 master-0 kubenswrapper[30420]: I0318 10:10:42.356100 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 18 10:10:42.376395 master-0 kubenswrapper[30420]: I0318 10:10:42.376066 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 18 10:10:42.386974 master-0 kubenswrapper[30420]: I0318 10:10:42.386896 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-nq7mw\" (UID: \"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" Mar 18 10:10:42.386974 master-0 kubenswrapper[30420]: I0318 10:10:42.386968 30420 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-os-release\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 10:10:42.387230 master-0 kubenswrapper[30420]: I0318 10:10:42.387032 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-run-ovn\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.387230 master-0 kubenswrapper[30420]: I0318 10:10:42.387041 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-nq7mw\" (UID: \"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" Mar 18 10:10:42.387230 master-0 kubenswrapper[30420]: I0318 10:10:42.387060 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-sysconfig\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 10:10:42.387230 master-0 kubenswrapper[30420]: I0318 10:10:42.387088 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-var-lib-kubelet\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 10:10:42.387230 master-0 
kubenswrapper[30420]: I0318 10:10:42.387181 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-sysconfig\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 10:10:42.387384 master-0 kubenswrapper[30420]: I0318 10:10:42.387319 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-run-ovn\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.387548 master-0 kubenswrapper[30420]: I0318 10:10:42.387507 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a3657106-1eea-4031-8c92-85ba6287b425-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"a3657106-1eea-4031-8c92-85ba6287b425\") " pod="openshift-kube-apiserver/installer-4-retry-1-master-0" Mar 18 10:10:42.387592 master-0 kubenswrapper[30420]: I0318 10:10:42.387540 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-var-lib-kubelet\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 10:10:42.387592 master-0 kubenswrapper[30420]: I0318 10:10:42.387563 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-cni-bin\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.387648 master-0 kubenswrapper[30420]: I0318 10:10:42.387595 
30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a3657106-1eea-4031-8c92-85ba6287b425-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"a3657106-1eea-4031-8c92-85ba6287b425\") " pod="openshift-kube-apiserver/installer-4-retry-1-master-0" Mar 18 10:10:42.387648 master-0 kubenswrapper[30420]: I0318 10:10:42.387601 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-cni-bin\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.387648 master-0 kubenswrapper[30420]: I0318 10:10:42.387556 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-os-release\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 10:10:42.387648 master-0 kubenswrapper[30420]: I0318 10:10:42.387622 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-var-lib-kubelet\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.387764 master-0 kubenswrapper[30420]: I0318 10:10:42.387666 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-var-lib-kubelet\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.387764 master-0 kubenswrapper[30420]: I0318 10:10:42.387686 30420 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.387764 master-0 kubenswrapper[30420]: I0318 10:10:42.387755 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-run-netns\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.387884 master-0 kubenswrapper[30420]: I0318 10:10:42.387845 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.387925 master-0 kubenswrapper[30420]: I0318 10:10:42.387896 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/b6948f93-b573-4f09-b754-aaa2269e2875-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-77n8q\" (UID: \"b6948f93-b573-4f09-b754-aaa2269e2875\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q" Mar 18 10:10:42.388224 master-0 kubenswrapper[30420]: I0318 10:10:42.387932 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-run-netns\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.388224 master-0 kubenswrapper[30420]: I0318 10:10:42.387942 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-system-cni-dir\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 10:10:42.388224 master-0 kubenswrapper[30420]: I0318 10:10:42.387992 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-system-cni-dir\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 10:10:42.388224 master-0 kubenswrapper[30420]: I0318 10:10:42.388008 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/b6948f93-b573-4f09-b754-aaa2269e2875-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-77n8q\" (UID: \"b6948f93-b573-4f09-b754-aaa2269e2875\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q" Mar 18 10:10:42.388224 master-0 kubenswrapper[30420]: I0318 10:10:42.388013 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-hostroot\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.388224 master-0 kubenswrapper[30420]: I0318 10:10:42.388047 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-hostroot\") pod \"multus-xgdvw\" 
(UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.388224 master-0 kubenswrapper[30420]: I0318 10:10:42.388084 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-systemd-units\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.388224 master-0 kubenswrapper[30420]: I0318 10:10:42.388178 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-var-lib-openvswitch\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.388578 master-0 kubenswrapper[30420]: I0318 10:10:42.388242 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/62b82d72-d73c-451a-84e1-551d73036aa8-host-slash\") pod \"iptables-alerter-r7h65\" (UID: \"62b82d72-d73c-451a-84e1-551d73036aa8\") " pod="openshift-network-operator/iptables-alerter-r7h65" Mar 18 10:10:42.388578 master-0 kubenswrapper[30420]: I0318 10:10:42.388346 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/62b82d72-d73c-451a-84e1-551d73036aa8-host-slash\") pod \"iptables-alerter-r7h65\" (UID: \"62b82d72-d73c-451a-84e1-551d73036aa8\") " pod="openshift-network-operator/iptables-alerter-r7h65" Mar 18 10:10:42.388578 master-0 kubenswrapper[30420]: I0318 10:10:42.388346 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-var-lib-openvswitch\") pod \"ovnkube-node-frnfl\" (UID: 
\"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.388578 master-0 kubenswrapper[30420]: I0318 10:10:42.388392 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-sysctl-d\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 10:10:42.388578 master-0 kubenswrapper[30420]: I0318 10:10:42.388438 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-sys\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 10:10:42.388578 master-0 kubenswrapper[30420]: I0318 10:10:42.388502 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-sysctl-d\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 10:10:42.388578 master-0 kubenswrapper[30420]: I0318 10:10:42.388554 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-sys\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 10:10:42.388578 master-0 kubenswrapper[30420]: I0318 10:10:42.388141 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-systemd-units\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.388801 master-0 kubenswrapper[30420]: I0318 10:10:42.388673 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-wtmp\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 10:10:42.388801 master-0 kubenswrapper[30420]: I0318 10:10:42.388734 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-run-multus-certs\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.388947 master-0 kubenswrapper[30420]: I0318 10:10:42.388814 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-run-multus-certs\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.388947 master-0 kubenswrapper[30420]: I0318 10:10:42.388855 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-var-lib-cni-multus\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.388947 master-0 kubenswrapper[30420]: I0318 10:10:42.388857 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-wtmp\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " 
pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 10:10:42.388947 master-0 kubenswrapper[30420]: I0318 10:10:42.388901 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-var-lib-cni-multus\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.388947 master-0 kubenswrapper[30420]: I0318 10:10:42.388948 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-cni-dir\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.389171 master-0 kubenswrapper[30420]: I0318 10:10:42.389024 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-run-netns\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.389171 master-0 kubenswrapper[30420]: I0318 10:10:42.389061 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-run-netns\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.389171 master-0 kubenswrapper[30420]: I0318 10:10:42.389127 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-cni-dir\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.389171 master-0 kubenswrapper[30420]: I0318 
10:10:42.389133 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-kubelet\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.389171 master-0 kubenswrapper[30420]: I0318 10:10:42.389170 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-kubelet\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.389309 master-0 kubenswrapper[30420]: I0318 10:10:42.389252 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/b6948f93-b573-4f09-b754-aaa2269e2875-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-77n8q\" (UID: \"b6948f93-b573-4f09-b754-aaa2269e2875\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q" Mar 18 10:10:42.389309 master-0 kubenswrapper[30420]: I0318 10:10:42.389295 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e8d3cf68-ed97-45b9-8c83-b42bb1f789fc-hosts-file\") pod \"node-resolver-hjpz8\" (UID: \"e8d3cf68-ed97-45b9-8c83-b42bb1f789fc\") " pod="openshift-dns/node-resolver-hjpz8" Mar 18 10:10:42.389424 master-0 kubenswrapper[30420]: I0318 10:10:42.389388 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e8d3cf68-ed97-45b9-8c83-b42bb1f789fc-hosts-file\") pod \"node-resolver-hjpz8\" (UID: \"e8d3cf68-ed97-45b9-8c83-b42bb1f789fc\") " pod="openshift-dns/node-resolver-hjpz8" Mar 18 10:10:42.389481 master-0 kubenswrapper[30420]: 
I0318 10:10:42.389459 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-os-release\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.389520 master-0 kubenswrapper[30420]: I0318 10:10:42.389461 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/b6948f93-b573-4f09-b754-aaa2269e2875-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-77n8q\" (UID: \"b6948f93-b573-4f09-b754-aaa2269e2875\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q" Mar 18 10:10:42.389554 master-0 kubenswrapper[30420]: I0318 10:10:42.389522 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-log-socket\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.389554 master-0 kubenswrapper[30420]: I0318 10:10:42.389543 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-os-release\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.389675 master-0 kubenswrapper[30420]: I0318 10:10:42.389552 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0c7b317c-d141-4e69-9c82-4a5dda6c3248-node-pullsecrets\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 10:10:42.389675 master-0 kubenswrapper[30420]: 
I0318 10:10:42.389596 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0c7b317c-d141-4e69-9c82-4a5dda6c3248-node-pullsecrets\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 10:10:42.389675 master-0 kubenswrapper[30420]: I0318 10:10:42.389624 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-log-socket\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.389675 master-0 kubenswrapper[30420]: I0318 10:10:42.389640 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-slash\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.390113 master-0 kubenswrapper[30420]: I0318 10:10:42.389719 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-slash\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.390113 master-0 kubenswrapper[30420]: I0318 10:10:42.390004 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/432f611b-a1a2-4cc9-b005-17a16413d281-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-9nd2s\" (UID: \"432f611b-a1a2-4cc9-b005-17a16413d281\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s" Mar 18 10:10:42.390113 master-0 
kubenswrapper[30420]: I0318 10:10:42.390087 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-socket-dir-parent\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.390230 master-0 kubenswrapper[30420]: I0318 10:10:42.390147 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/432f611b-a1a2-4cc9-b005-17a16413d281-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-9nd2s\" (UID: \"432f611b-a1a2-4cc9-b005-17a16413d281\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s" Mar 18 10:10:42.390230 master-0 kubenswrapper[30420]: I0318 10:10:42.390153 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-socket-dir-parent\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.390230 master-0 kubenswrapper[30420]: I0318 10:10:42.390203 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-run-systemd\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.390344 master-0 kubenswrapper[30420]: I0318 10:10:42.390228 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-cnibin\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " 
pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 10:10:42.390344 master-0 kubenswrapper[30420]: I0318 10:10:42.390244 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-run-systemd\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.390344 master-0 kubenswrapper[30420]: I0318 10:10:42.390254 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1cb8ab19-0564-4182-a7e3-0943c1480663-sys\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 10:10:42.390344 master-0 kubenswrapper[30420]: I0318 10:10:42.390302 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-etc-openvswitch\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.390344 master-0 kubenswrapper[30420]: I0318 10:10:42.390302 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-cnibin\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 10:10:42.390502 master-0 kubenswrapper[30420]: I0318 10:10:42.390352 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1cb8ab19-0564-4182-a7e3-0943c1480663-sys\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 
10:10:42.390502 master-0 kubenswrapper[30420]: I0318 10:10:42.390396 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-system-cni-dir\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.390502 master-0 kubenswrapper[30420]: I0318 10:10:42.390412 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-etc-openvswitch\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.390502 master-0 kubenswrapper[30420]: I0318 10:10:42.390433 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-sysctl-conf\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 10:10:42.390502 master-0 kubenswrapper[30420]: I0318 10:10:42.390469 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3657106-1eea-4031-8c92-85ba6287b425-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"a3657106-1eea-4031-8c92-85ba6287b425\") " pod="openshift-kube-apiserver/installer-4-retry-1-master-0" Mar 18 10:10:42.390666 master-0 kubenswrapper[30420]: I0318 10:10:42.390506 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-system-cni-dir\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.390666 master-0 kubenswrapper[30420]: 
I0318 10:10:42.390512 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-modprobe-d\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 10:10:42.390666 master-0 kubenswrapper[30420]: I0318 10:10:42.390572 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-sysctl-conf\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 10:10:42.390666 master-0 kubenswrapper[30420]: I0318 10:10:42.390604 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3657106-1eea-4031-8c92-85ba6287b425-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"a3657106-1eea-4031-8c92-85ba6287b425\") " pod="openshift-kube-apiserver/installer-4-retry-1-master-0" Mar 18 10:10:42.390666 master-0 kubenswrapper[30420]: I0318 10:10:42.390610 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-nq7mw\" (UID: \"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" Mar 18 10:10:42.390666 master-0 kubenswrapper[30420]: I0318 10:10:42.390659 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-nq7mw\" (UID: \"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a\") " 
pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" Mar 18 10:10:42.390909 master-0 kubenswrapper[30420]: I0318 10:10:42.390607 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-modprobe-d\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 10:10:42.390909 master-0 kubenswrapper[30420]: I0318 10:10:42.390710 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-run-ovn-kubernetes\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.390909 master-0 kubenswrapper[30420]: I0318 10:10:42.390833 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-run-ovn-kubernetes\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.391004 master-0 kubenswrapper[30420]: I0318 10:10:42.390918 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-lib-modules\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 10:10:42.391004 master-0 kubenswrapper[30420]: I0318 10:10:42.390977 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-host-etc-kube\") pod 
\"cluster-cloud-controller-manager-operator-7dff898856-rpxn4\" (UID: \"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4" Mar 18 10:10:42.391059 master-0 kubenswrapper[30420]: I0318 10:10:42.391017 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/432f611b-a1a2-4cc9-b005-17a16413d281-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-9nd2s\" (UID: \"432f611b-a1a2-4cc9-b005-17a16413d281\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s" Mar 18 10:10:42.391100 master-0 kubenswrapper[30420]: I0318 10:10:42.391071 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/432f611b-a1a2-4cc9-b005-17a16413d281-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-9nd2s\" (UID: \"432f611b-a1a2-4cc9-b005-17a16413d281\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s" Mar 18 10:10:42.391100 master-0 kubenswrapper[30420]: I0318 10:10:42.391086 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-rpxn4\" (UID: \"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4" Mar 18 10:10:42.391167 master-0 kubenswrapper[30420]: I0318 10:10:42.391100 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-cni-netd\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 
10:10:42.391167 master-0 kubenswrapper[30420]: I0318 10:10:42.391115 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-lib-modules\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 10:10:42.391167 master-0 kubenswrapper[30420]: I0318 10:10:42.391124 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-cnibin\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.391167 master-0 kubenswrapper[30420]: I0318 10:10:42.391162 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-cnibin\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.391295 master-0 kubenswrapper[30420]: I0318 10:10:42.391165 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-host-cni-netd\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.391295 master-0 kubenswrapper[30420]: I0318 10:10:42.391205 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8b906fc0-f2bf-4586-97e6-921bbd467b65-audit-dir\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 10:10:42.391295 master-0 kubenswrapper[30420]: I0318 10:10:42.391278 30420 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-run\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 10:10:42.391381 master-0 kubenswrapper[30420]: I0318 10:10:42.391212 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8b906fc0-f2bf-4586-97e6-921bbd467b65-audit-dir\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 10:10:42.391472 master-0 kubenswrapper[30420]: I0318 10:10:42.391429 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 10:10:42.391506 master-0 kubenswrapper[30420]: I0318 10:10:42.391476 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c7b317c-d141-4e69-9c82-4a5dda6c3248-audit-dir\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 10:10:42.391506 master-0 kubenswrapper[30420]: I0318 10:10:42.391432 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-run\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 10:10:42.391563 master-0 kubenswrapper[30420]: I0318 10:10:42.391503 30420 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/91331360-dc70-45bb-a815-e00664bae6c4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 10:10:42.391563 master-0 kubenswrapper[30420]: I0318 10:10:42.391511 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-systemd\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 10:10:42.391563 master-0 kubenswrapper[30420]: I0318 10:10:42.391526 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c7b317c-d141-4e69-9c82-4a5dda6c3248-audit-dir\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 10:10:42.391652 master-0 kubenswrapper[30420]: I0318 10:10:42.391564 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-run-openvswitch\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.391652 master-0 kubenswrapper[30420]: I0318 10:10:42.391598 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-node-log\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.391712 master-0 kubenswrapper[30420]: I0318 10:10:42.391653 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-systemd\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 10:10:42.391712 master-0 kubenswrapper[30420]: I0318 10:10:42.391661 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-node-log\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.391712 master-0 kubenswrapper[30420]: I0318 10:10:42.391699 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-run-openvswitch\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:42.391794 master-0 kubenswrapper[30420]: I0318 10:10:42.391737 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-kubernetes\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 10:10:42.391794 master-0 kubenswrapper[30420]: I0318 10:10:42.391784 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9ccdc221-4ec5-487e-8ec4-85284ed628d8-host-etc-kube\") pod \"network-operator-7bd846bfc4-8srnz\" (UID: \"9ccdc221-4ec5-487e-8ec4-85284ed628d8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-8srnz" Mar 18 10:10:42.391909 master-0 kubenswrapper[30420]: I0318 10:10:42.391883 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" 
(UniqueName: \"kubernetes.io/host-path/9ccdc221-4ec5-487e-8ec4-85284ed628d8-host-etc-kube\") pod \"network-operator-7bd846bfc4-8srnz\" (UID: \"9ccdc221-4ec5-487e-8ec4-85284ed628d8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-8srnz" Mar 18 10:10:42.391958 master-0 kubenswrapper[30420]: I0318 10:10:42.391920 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-conf-dir\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.391958 master-0 kubenswrapper[30420]: I0318 10:10:42.391944 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-etc-kubernetes\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 10:10:42.392041 master-0 kubenswrapper[30420]: I0318 10:10:42.391974 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-etc-kubernetes\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.392041 master-0 kubenswrapper[30420]: I0318 10:10:42.392004 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-etc-kubernetes\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.392041 master-0 kubenswrapper[30420]: I0318 10:10:42.392004 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-multus-conf-dir\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.392153 master-0 kubenswrapper[30420]: I0318 10:10:42.392072 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-host\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 10:10:42.392153 master-0 kubenswrapper[30420]: I0318 10:10:42.392134 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-run-k8s-cni-cncf-io\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.392231 master-0 kubenswrapper[30420]: I0318 10:10:42.392181 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b0f77d68-f228-4f82-befb-fb2a2ce2e976-host\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 10:10:42.392276 master-0 kubenswrapper[30420]: I0318 10:10:42.392249 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-var-lib-cni-bin\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.392276 master-0 kubenswrapper[30420]: I0318 10:10:42.392252 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-run-k8s-cni-cncf-io\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.392352 master-0 kubenswrapper[30420]: I0318 10:10:42.392282 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-host-var-lib-cni-bin\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:42.392389 master-0 kubenswrapper[30420]: I0318 10:10:42.392357 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/1cb8ab19-0564-4182-a7e3-0943c1480663-root\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 10:10:42.392419 master-0 kubenswrapper[30420]: I0318 10:10:42.392384 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/1cb8ab19-0564-4182-a7e3-0943c1480663-root\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 10:10:42.392452 master-0 kubenswrapper[30420]: I0318 10:10:42.392441 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e5e0836f-c0b4-40cd-9f63-55774da2740e-rootfs\") pod \"machine-config-daemon-mtdk2\" (UID: \"e5e0836f-c0b4-40cd-9f63-55774da2740e\") " pod="openshift-machine-config-operator/machine-config-daemon-mtdk2" Mar 18 10:10:42.392549 master-0 kubenswrapper[30420]: I0318 10:10:42.392518 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e5e0836f-c0b4-40cd-9f63-55774da2740e-rootfs\") pod 
\"machine-config-daemon-mtdk2\" (UID: \"e5e0836f-c0b4-40cd-9f63-55774da2740e\") " pod="openshift-machine-config-operator/machine-config-daemon-mtdk2" Mar 18 10:10:42.397140 master-0 kubenswrapper[30420]: I0318 10:10:42.397097 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 18 10:10:42.416291 master-0 kubenswrapper[30420]: I0318 10:10:42.416219 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 18 10:10:42.437362 master-0 kubenswrapper[30420]: I0318 10:10:42.437198 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 18 10:10:42.457659 master-0 kubenswrapper[30420]: I0318 10:10:42.457572 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-84shv" Mar 18 10:10:42.487259 master-0 kubenswrapper[30420]: I0318 10:10:42.478990 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 18 10:10:42.508657 master-0 kubenswrapper[30420]: I0318 10:10:42.508591 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 18 10:10:42.520570 master-0 kubenswrapper[30420]: I0318 10:10:42.519847 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/b6948f93-b573-4f09-b754-aaa2269e2875-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-77n8q\" (UID: \"b6948f93-b573-4f09-b754-aaa2269e2875\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q" Mar 18 10:10:42.522269 master-0 kubenswrapper[30420]: I0318 10:10:42.522214 30420 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-catalogd"/"kube-root-ca.crt"
Mar 18 10:10:42.538594 master-0 kubenswrapper[30420]: I0318 10:10:42.536578 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Mar 18 10:10:42.551409 master-0 kubenswrapper[30420]: I0318 10:10:42.551359 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-nq7mw\" (UID: \"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw"
Mar 18 10:10:42.578867 master-0 kubenswrapper[30420]: I0318 10:10:42.571067 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Mar 18 10:10:42.578867 master-0 kubenswrapper[30420]: I0318 10:10:42.577164 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Mar 18 10:10:42.586232 master-0 kubenswrapper[30420]: I0318 10:10:42.586167 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-nq7mw\" (UID: \"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw"
Mar 18 10:10:42.604358 master-0 kubenswrapper[30420]: I0318 10:10:42.600782 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Mar 18 10:10:42.606874 master-0 kubenswrapper[30420]: I0318 10:10:42.606091 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43d54514-989c-4c82-93f9-153b44eacdd1-service-ca-bundle\") pod \"router-default-7dcf5569b5-82tbk\" (UID: \"43d54514-989c-4c82-93f9-153b44eacdd1\") " pod="openshift-ingress/router-default-7dcf5569b5-82tbk"
Mar 18 10:10:42.616644 master-0 kubenswrapper[30420]: I0318 10:10:42.616593 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Mar 18 10:10:42.623424 master-0 kubenswrapper[30420]: I0318 10:10:42.623377 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/43d54514-989c-4c82-93f9-153b44eacdd1-default-certificate\") pod \"router-default-7dcf5569b5-82tbk\" (UID: \"43d54514-989c-4c82-93f9-153b44eacdd1\") " pod="openshift-ingress/router-default-7dcf5569b5-82tbk"
Mar 18 10:10:42.639791 master-0 kubenswrapper[30420]: I0318 10:10:42.639646 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Mar 18 10:10:42.647550 master-0 kubenswrapper[30420]: I0318 10:10:42.647497 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/43d54514-989c-4c82-93f9-153b44eacdd1-stats-auth\") pod \"router-default-7dcf5569b5-82tbk\" (UID: \"43d54514-989c-4c82-93f9-153b44eacdd1\") " pod="openshift-ingress/router-default-7dcf5569b5-82tbk"
Mar 18 10:10:42.657615 master-0 kubenswrapper[30420]: I0318 10:10:42.657555 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Mar 18 10:10:42.665433 master-0 kubenswrapper[30420]: I0318 10:10:42.665366 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43d54514-989c-4c82-93f9-153b44eacdd1-metrics-certs\") pod \"router-default-7dcf5569b5-82tbk\" (UID: \"43d54514-989c-4c82-93f9-153b44eacdd1\") " pod="openshift-ingress/router-default-7dcf5569b5-82tbk"
Mar 18 10:10:42.688859 master-0 kubenswrapper[30420]: I0318 10:10:42.677614 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Mar 18 10:10:42.705738 master-0 kubenswrapper[30420]: I0318 10:10:42.705676 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Mar 18 10:10:42.715347 master-0 kubenswrapper[30420]: I0318 10:10:42.715295 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-rslmx\" (UID: \"2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rslmx"
Mar 18 10:10:42.723734 master-0 kubenswrapper[30420]: I0318 10:10:42.721325 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Mar 18 10:10:42.723734 master-0 kubenswrapper[30420]: I0318 10:10:42.723711 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/582d2ba8-1210-47d0-a530-0b20b2fdde22-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-4wcqx\" (UID: \"582d2ba8-1210-47d0-a530-0b20b2fdde22\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4wcqx"
Mar 18 10:10:42.752974 master-0 kubenswrapper[30420]: I0318 10:10:42.750559 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 18 10:10:42.756437 master-0 kubenswrapper[30420]: I0318 10:10:42.756377 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 18 10:10:42.757857 master-0 kubenswrapper[30420]: I0318 10:10:42.757609 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1ad4aa30-f7d5-47ca-b01e-2643f7195685-machine-approver-tls\") pod \"machine-approver-5c6485487f-95jvh\" (UID: \"1ad4aa30-f7d5-47ca-b01e-2643f7195685\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-95jvh"
Mar 18 10:10:42.768320 master-0 kubenswrapper[30420]: I0318 10:10:42.768261 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-retry-1-master-0"
Mar 18 10:10:42.777923 master-0 kubenswrapper[30420]: I0318 10:10:42.777781 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-wqrrj"
Mar 18 10:10:42.797104 master-0 kubenswrapper[30420]: I0318 10:10:42.796897 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 18 10:10:42.805572 master-0 kubenswrapper[30420]: I0318 10:10:42.805523 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ad4aa30-f7d5-47ca-b01e-2643f7195685-auth-proxy-config\") pod \"machine-approver-5c6485487f-95jvh\" (UID: \"1ad4aa30-f7d5-47ca-b01e-2643f7195685\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-95jvh"
Mar 18 10:10:42.831862 master-0 kubenswrapper[30420]: I0318 10:10:42.830093 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 18 10:10:42.840860 master-0 kubenswrapper[30420]: I0318 10:10:42.836367 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ad4aa30-f7d5-47ca-b01e-2643f7195685-config\") pod \"machine-approver-5c6485487f-95jvh\" (UID: \"1ad4aa30-f7d5-47ca-b01e-2643f7195685\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-95jvh"
Mar 18 10:10:42.840860 master-0 kubenswrapper[30420]: I0318 10:10:42.840253 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 18 10:10:42.865857 master-0 kubenswrapper[30420]: I0318 10:10:42.865278 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 18 10:10:42.866078 master-0 kubenswrapper[30420]: I0318 10:10:42.865864 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-rslmx\" (UID: \"2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rslmx"
Mar 18 10:10:42.866078 master-0 kubenswrapper[30420]: I0318 10:10:42.865911 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2d014721-ed53-447a-b737-c496bbba18be-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-gnl5t\" (UID: \"2d014721-ed53-447a-b737-c496bbba18be\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t"
Mar 18 10:10:42.870902 master-0 kubenswrapper[30420]: I0318 10:10:42.870148 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e5e0836f-c0b4-40cd-9f63-55774da2740e-mcd-auth-proxy-config\") pod \"machine-config-daemon-mtdk2\" (UID: \"e5e0836f-c0b4-40cd-9f63-55774da2740e\") " pod="openshift-machine-config-operator/machine-config-daemon-mtdk2"
Mar 18 10:10:42.877278 master-0 kubenswrapper[30420]: I0318 10:10:42.877235 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Mar 18 10:10:42.884156 master-0 kubenswrapper[30420]: I0318 10:10:42.884120 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da04c6fa-4916-4bed-a6b2-cc92bf2ee379-config-volume\") pod \"dns-default-z9sf5\" (UID: \"da04c6fa-4916-4bed-a6b2-cc92bf2ee379\") " pod="openshift-dns/dns-default-z9sf5"
Mar 18 10:10:42.898599 master-0 kubenswrapper[30420]: I0318 10:10:42.898117 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 18 10:10:42.909159 master-0 kubenswrapper[30420]: I0318 10:10:42.909108 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a3657106-1eea-4031-8c92-85ba6287b425-var-lock\") pod \"a3657106-1eea-4031-8c92-85ba6287b425\" (UID: \"a3657106-1eea-4031-8c92-85ba6287b425\") "
Mar 18 10:10:42.909322 master-0 kubenswrapper[30420]: I0318 10:10:42.909224 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3657106-1eea-4031-8c92-85ba6287b425-kubelet-dir\") pod \"a3657106-1eea-4031-8c92-85ba6287b425\" (UID: \"a3657106-1eea-4031-8c92-85ba6287b425\") "
Mar 18 10:10:42.909322 master-0 kubenswrapper[30420]: I0318 10:10:42.909243 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3657106-1eea-4031-8c92-85ba6287b425-var-lock" (OuterVolumeSpecName: "var-lock") pod "a3657106-1eea-4031-8c92-85ba6287b425" (UID: "a3657106-1eea-4031-8c92-85ba6287b425"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 10:10:42.909428 master-0 kubenswrapper[30420]: I0318 10:10:42.909361 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3657106-1eea-4031-8c92-85ba6287b425-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a3657106-1eea-4031-8c92-85ba6287b425" (UID: "a3657106-1eea-4031-8c92-85ba6287b425"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 10:10:42.910741 master-0 kubenswrapper[30420]: I0318 10:10:42.910708 30420 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a3657106-1eea-4031-8c92-85ba6287b425-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 10:10:42.910741 master-0 kubenswrapper[30420]: I0318 10:10:42.910734 30420 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3657106-1eea-4031-8c92-85ba6287b425-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 10:10:42.916352 master-0 kubenswrapper[30420]: I0318 10:10:42.916313 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-gllg9"
Mar 18 10:10:42.936183 master-0 kubenswrapper[30420]: I0318 10:10:42.936128 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Mar 18 10:10:42.940864 master-0 kubenswrapper[30420]: I0318 10:10:42.940810 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/da04c6fa-4916-4bed-a6b2-cc92bf2ee379-metrics-tls\") pod \"dns-default-z9sf5\" (UID: \"da04c6fa-4916-4bed-a6b2-cc92bf2ee379\") " pod="openshift-dns/dns-default-z9sf5"
Mar 18 10:10:42.956295 master-0 kubenswrapper[30420]: I0318 10:10:42.956231 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-2w9kv"
Mar 18 10:10:42.976292 master-0 kubenswrapper[30420]: I0318 10:10:42.976200 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-fr2b8"
Mar 18 10:10:42.996606 master-0 kubenswrapper[30420]: I0318 10:10:42.996514 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Mar 18 10:10:43.016569 master-0 kubenswrapper[30420]: I0318 10:10:43.016503 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 18 10:10:43.037992 master-0 kubenswrapper[30420]: I0318 10:10:43.037929 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-hvm64"
Mar 18 10:10:43.056736 master-0 kubenswrapper[30420]: I0318 10:10:43.056671 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-cgqlv"
Mar 18 10:10:43.076936 master-0 kubenswrapper[30420]: I0318 10:10:43.076871 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 18 10:10:43.086199 master-0 kubenswrapper[30420]: I0318 10:10:43.086139 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2d014721-ed53-447a-b737-c496bbba18be-images\") pod \"machine-config-operator-84d549f6d5-gnl5t\" (UID: \"2d014721-ed53-447a-b737-c496bbba18be\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t"
Mar 18 10:10:43.096606 master-0 kubenswrapper[30420]: I0318 10:10:43.096550 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-vsqqr"
Mar 18 10:10:43.116309 master-0 kubenswrapper[30420]: I0318 10:10:43.116249 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Mar 18 10:10:43.122181 master-0 kubenswrapper[30420]: I0318 10:10:43.122141 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/caec44dc-aab7-4407-b34a-52bbe4b4f635-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-rtnkl\" (UID: \"caec44dc-aab7-4407-b34a-52bbe4b4f635\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-rtnkl"
Mar 18 10:10:43.154727 master-0 kubenswrapper[30420]: I0318 10:10:43.154369 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Mar 18 10:10:43.156730 master-0 kubenswrapper[30420]: I0318 10:10:43.155863 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/caec44dc-aab7-4407-b34a-52bbe4b4f635-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-rtnkl\" (UID: \"caec44dc-aab7-4407-b34a-52bbe4b4f635\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-rtnkl"
Mar 18 10:10:43.160410 master-0 kubenswrapper[30420]: I0318 10:10:43.156943 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Mar 18 10:10:43.175997 master-0 kubenswrapper[30420]: I0318 10:10:43.175946 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 18 10:10:43.186372 master-0 kubenswrapper[30420]: I0318 10:10:43.186097 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/432f611b-a1a2-4cc9-b005-17a16413d281-serving-cert\") pod \"cluster-version-operator-7d58488df-9nd2s\" (UID: \"432f611b-a1a2-4cc9-b005-17a16413d281\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s"
Mar 18 10:10:43.194540 master-0 kubenswrapper[30420]: I0318 10:10:43.194501 30420 request.go:700] Waited for 1.017909613s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0
Mar 18 10:10:43.195723 master-0 kubenswrapper[30420]: I0318 10:10:43.195696 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 18 10:10:43.208188 master-0 kubenswrapper[30420]: I0318 10:10:43.208137 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/432f611b-a1a2-4cc9-b005-17a16413d281-service-ca\") pod \"cluster-version-operator-7d58488df-9nd2s\" (UID: \"432f611b-a1a2-4cc9-b005-17a16413d281\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s"
Mar 18 10:10:43.217677 master-0 kubenswrapper[30420]: I0318 10:10:43.216118 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 18 10:10:43.236251 master-0 kubenswrapper[30420]: I0318 10:10:43.236208 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Mar 18 10:10:43.256498 master-0 kubenswrapper[30420]: I0318 10:10:43.256310 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-lrdkh"
Mar 18 10:10:43.272186 master-0 kubenswrapper[30420]: E0318 10:10:43.272128 30420 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.272186 master-0 kubenswrapper[30420]: E0318 10:10:43.272155 30420 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.272412 master-0 kubenswrapper[30420]: E0318 10:10:43.272180 30420 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.272412 master-0 kubenswrapper[30420]: E0318 10:10:43.272220 30420 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.272412 master-0 kubenswrapper[30420]: E0318 10:10:43.272244 30420 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.272412 master-0 kubenswrapper[30420]: E0318 10:10:43.272128 30420 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.272412 master-0 kubenswrapper[30420]: E0318 10:10:43.272153 30420 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.272412 master-0 kubenswrapper[30420]: E0318 10:10:43.272280 30420 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.272412 master-0 kubenswrapper[30420]: E0318 10:10:43.272212 30420 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.272412 master-0 kubenswrapper[30420]: E0318 10:10:43.272304 30420 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.272412 master-0 kubenswrapper[30420]: E0318 10:10:43.272320 30420 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-113q5nsjog6km: failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.272412 master-0 kubenswrapper[30420]: E0318 10:10:43.272331 30420 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.272412 master-0 kubenswrapper[30420]: E0318 10:10:43.272340 30420 secret.go:189] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.272412 master-0 kubenswrapper[30420]: E0318 10:10:43.272225 30420 secret.go:189] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.272412 master-0 kubenswrapper[30420]: E0318 10:10:43.272331 30420 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.272412 master-0 kubenswrapper[30420]: E0318 10:10:43.272276 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-metrics-server-audit-profiles podName:106fc2a2-9e7b-4f86-94b8-b1a1906646d8 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.77222139 +0000 UTC m=+7.824967319 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-metrics-server-audit-profiles") pod "metrics-server-74c475bc87-xx98m" (UID: "106fc2a2-9e7b-4f86-94b8-b1a1906646d8") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.272412 master-0 kubenswrapper[30420]: E0318 10:10:43.272383 30420 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.272412 master-0 kubenswrapper[30420]: E0318 10:10:43.272377 30420 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.272412 master-0 kubenswrapper[30420]: E0318 10:10:43.272413 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-image-import-ca podName:0c7b317c-d141-4e69-9c82-4a5dda6c3248 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.772390194 +0000 UTC m=+7.825136233 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-image-import-ca") pod "apiserver-687747fbb4-k7dnf" (UID: "0c7b317c-d141-4e69-9c82-4a5dda6c3248") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.273097 master-0 kubenswrapper[30420]: E0318 10:10:43.272434 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-config podName:8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.772423785 +0000 UTC m=+7.825169834 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-config") pod "route-controller-manager-5657df7dd8-4pp68" (UID: "8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.273097 master-0 kubenswrapper[30420]: E0318 10:10:43.272447 30420 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.273097 master-0 kubenswrapper[30420]: E0318 10:10:43.272452 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-client-ca podName:8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.772443715 +0000 UTC m=+7.825189644 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-client-ca") pod "route-controller-manager-5657df7dd8-4pp68" (UID: "8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.273097 master-0 kubenswrapper[30420]: E0318 10:10:43.272469 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9cfd2323-c33a-4d80-9c25-710920c0e605-prometheus-operator-kube-rbac-proxy-config podName:9cfd2323-c33a-4d80-9c25-710920c0e605 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.772461626 +0000 UTC m=+7.825207675 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/9cfd2323-c33a-4d80-9c25-710920c0e605-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-6c8df6d4b-886k6" (UID: "9cfd2323-c33a-4d80-9c25-710920c0e605") : failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.273097 master-0 kubenswrapper[30420]: E0318 10:10:43.272487 30420 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.273097 master-0 kubenswrapper[30420]: E0318 10:10:43.272496 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-kube-rbac-proxy-config podName:1cb8ab19-0564-4182-a7e3-0943c1480663 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.772487006 +0000 UTC m=+7.825232935 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-kube-rbac-proxy-config") pod "node-exporter-l9q9t" (UID: "1cb8ab19-0564-4182-a7e3-0943c1480663") : failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.273097 master-0 kubenswrapper[30420]: E0318 10:10:43.272512 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-client-ca podName:9fc664ff-2e8f-441d-82dc-8f21c1d362d7 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.772504827 +0000 UTC m=+7.825250896 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-client-ca") pod "controller-manager-6c87d45bb4-vxcx9" (UID: "9fc664ff-2e8f-441d-82dc-8f21c1d362d7") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.273097 master-0 kubenswrapper[30420]: E0318 10:10:43.272530 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1084562a-20a0-432d-b739-90bc0a4daff2-config podName:1084562a-20a0-432d-b739-90bc0a4daff2 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.772521557 +0000 UTC m=+7.825267606 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1084562a-20a0-432d-b739-90bc0a4daff2-config") pod "cluster-baremetal-operator-6f69995874-lnq7l" (UID: "1084562a-20a0-432d-b739-90bc0a4daff2") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.273097 master-0 kubenswrapper[30420]: E0318 10:10:43.272538 30420 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.273097 master-0 kubenswrapper[30420]: E0318 10:10:43.272550 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-cloud-controller-manager-operator-tls podName:8641c1d1-dd79-4f1f-9343-52d1ee6faf9f nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.772541918 +0000 UTC m=+7.825287847 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-7dff898856-rpxn4" (UID: "8641c1d1-dd79-4f1f-9343-52d1ee6faf9f") : failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.273097 master-0 kubenswrapper[30420]: E0318 10:10:43.272574 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d014721-ed53-447a-b737-c496bbba18be-proxy-tls podName:2d014721-ed53-447a-b737-c496bbba18be nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.772565258 +0000 UTC m=+7.825311317 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/2d014721-ed53-447a-b737-c496bbba18be-proxy-tls") pod "machine-config-operator-84d549f6d5-gnl5t" (UID: "2d014721-ed53-447a-b737-c496bbba18be") : failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.273097 master-0 kubenswrapper[30420]: E0318 10:10:43.272575 30420 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.273097 master-0 kubenswrapper[30420]: E0318 10:10:43.272594 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-client-ca-bundle podName:106fc2a2-9e7b-4f86-94b8-b1a1906646d8 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.772587339 +0000 UTC m=+7.825333388 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-client-ca-bundle") pod "metrics-server-74c475bc87-xx98m" (UID: "106fc2a2-9e7b-4f86-94b8-b1a1906646d8") : failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.273097 master-0 kubenswrapper[30420]: E0318 10:10:43.272698 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/196e7607-1ddf-467b-9901-b4be746130a1-node-bootstrap-token podName:196e7607-1ddf-467b-9901-b4be746130a1 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.772653281 +0000 UTC m=+7.825399290 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/196e7607-1ddf-467b-9901-b4be746130a1-node-bootstrap-token") pod "machine-config-server-9wnkm" (UID: "196e7607-1ddf-467b-9901-b4be746130a1") : failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.273097 master-0 kubenswrapper[30420]: E0318 10:10:43.272741 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/196e7607-1ddf-467b-9901-b4be746130a1-certs podName:196e7607-1ddf-467b-9901-b4be746130a1 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.772727102 +0000 UTC m=+7.825473161 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/196e7607-1ddf-467b-9901-b4be746130a1-certs") pod "machine-config-server-9wnkm" (UID: "196e7607-1ddf-467b-9901-b4be746130a1") : failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.273097 master-0 kubenswrapper[30420]: E0318 10:10:43.272765 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c7b317c-d141-4e69-9c82-4a5dda6c3248-etcd-client podName:0c7b317c-d141-4e69-9c82-4a5dda6c3248 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.772754393 +0000 UTC m=+7.825500452 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0c7b317c-d141-4e69-9c82-4a5dda6c3248-etcd-client") pod "apiserver-687747fbb4-k7dnf" (UID: "0c7b317c-d141-4e69-9c82-4a5dda6c3248") : failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.273097 master-0 kubenswrapper[30420]: E0318 10:10:43.272788 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1084562a-20a0-432d-b739-90bc0a4daff2-images podName:1084562a-20a0-432d-b739-90bc0a4daff2 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.772776744 +0000 UTC m=+7.825522793 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/1084562a-20a0-432d-b739-90bc0a4daff2-images") pod "cluster-baremetal-operator-6f69995874-lnq7l" (UID: "1084562a-20a0-432d-b739-90bc0a4daff2") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.273097 master-0 kubenswrapper[30420]: E0318 10:10:43.272787 30420 configmap.go:193] Couldn't get configMap openshift-network-node-identity/ovnkube-identity-cm: failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.273097 master-0 kubenswrapper[30420]: E0318 10:10:43.272814 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c7b317c-d141-4e69-9c82-4a5dda6c3248-encryption-config podName:0c7b317c-d141-4e69-9c82-4a5dda6c3248 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.772803324 +0000 UTC m=+7.825549373 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/0c7b317c-d141-4e69-9c82-4a5dda6c3248-encryption-config") pod "apiserver-687747fbb4-k7dnf" (UID: "0c7b317c-d141-4e69-9c82-4a5dda6c3248") : failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.273097 master-0 kubenswrapper[30420]: E0318 10:10:43.272859 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-serving-cert podName:9fc664ff-2e8f-441d-82dc-8f21c1d362d7 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.772847866 +0000 UTC m=+7.825593905 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-serving-cert") pod "controller-manager-6c87d45bb4-vxcx9" (UID: "9fc664ff-2e8f-441d-82dc-8f21c1d362d7") : failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.273097 master-0 kubenswrapper[30420]: E0318 10:10:43.272882 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-audit podName:0c7b317c-d141-4e69-9c82-4a5dda6c3248 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.772872006 +0000 UTC m=+7.825618055 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-audit") pod "apiserver-687747fbb4-k7dnf" (UID: "0c7b317c-d141-4e69-9c82-4a5dda6c3248") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.273097 master-0 kubenswrapper[30420]: E0318 10:10:43.272907 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/29fbc78b-1887-40d4-8165-f0f7cc40b583-images podName:29fbc78b-1887-40d4-8165-f0f7cc40b583 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.772896547 +0000 UTC m=+7.825642606 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/29fbc78b-1887-40d4-8165-f0f7cc40b583-images") pod "machine-api-operator-6fbb6cf6f9-xnvn9" (UID: "29fbc78b-1887-40d4-8165-f0f7cc40b583") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.273097 master-0 kubenswrapper[30420]: E0318 10:10:43.272938 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5e0836f-c0b4-40cd-9f63-55774da2740e-proxy-tls podName:e5e0836f-c0b4-40cd-9f63-55774da2740e nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.772928098 +0000 UTC m=+7.825674157 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/e5e0836f-c0b4-40cd-9f63-55774da2740e-proxy-tls") pod "machine-config-daemon-mtdk2" (UID: "e5e0836f-c0b4-40cd-9f63-55774da2740e") : failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.273097 master-0 kubenswrapper[30420]: E0318 10:10:43.272958 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/29fbc78b-1887-40d4-8165-f0f7cc40b583-config podName:29fbc78b-1887-40d4-8165-f0f7cc40b583 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.772949458 +0000 UTC m=+7.825695507 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/29fbc78b-1887-40d4-8165-f0f7cc40b583-config") pod "machine-api-operator-6fbb6cf6f9-xnvn9" (UID: "29fbc78b-1887-40d4-8165-f0f7cc40b583") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.273097 master-0 kubenswrapper[30420]: E0318 10:10:43.272980 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bb942756-bac7-414d-b179-cebdce588a13-ovnkube-identity-cm podName:bb942756-bac7-414d-b179-cebdce588a13 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.772969229 +0000 UTC m=+7.825715288 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovnkube-identity-cm" (UniqueName: "kubernetes.io/configmap/bb942756-bac7-414d-b179-cebdce588a13-ovnkube-identity-cm") pod "network-node-identity-7fl4x" (UID: "bb942756-bac7-414d-b179-cebdce588a13") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.273097 master-0 kubenswrapper[30420]: E0318 10:10:43.272984 30420 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.273097 master-0 kubenswrapper[30420]: E0318 10:10:43.273059 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f875878f-3588-42f1-9488-750d9f4582f8-webhook-certs podName:f875878f-3588-42f1-9488-750d9f4582f8 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.773048561 +0000 UTC m=+7.825794610 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f875878f-3588-42f1-9488-750d9f4582f8-webhook-certs") pod "multus-admission-controller-58c9f8fc64-ssnvh" (UID: "f875878f-3588-42f1-9488-750d9f4582f8") : failed to sync secret cache: timed out waiting for the condition Mar 18 10:10:43.274443 master-0 kubenswrapper[30420]: E0318 10:10:43.274170 30420 configmap.go:193] Couldn't get configMap openshift-network-node-identity/env-overrides: failed to sync configmap cache: timed out waiting for the condition Mar 18 10:10:43.274443 master-0 kubenswrapper[30420]: E0318 10:10:43.274221 30420 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 18 10:10:43.274443 master-0 kubenswrapper[30420]: E0318 10:10:43.274229 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bb942756-bac7-414d-b179-cebdce588a13-env-overrides podName:bb942756-bac7-414d-b179-cebdce588a13 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.77421586 +0000 UTC m=+7.826961879 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "env-overrides" (UniqueName: "kubernetes.io/configmap/bb942756-bac7-414d-b179-cebdce588a13-env-overrides") pod "network-node-identity-7fl4x" (UID: "bb942756-bac7-414d-b179-cebdce588a13") : failed to sync configmap cache: timed out waiting for the condition Mar 18 10:10:43.274443 master-0 kubenswrapper[30420]: E0318 10:10:43.274280 30420 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 10:10:43.274443 master-0 kubenswrapper[30420]: E0318 10:10:43.274285 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-trusted-ca-bundle podName:0c7b317c-d141-4e69-9c82-4a5dda6c3248 nodeName:}" failed. 
No retries permitted until 2026-03-18 10:10:43.774274101 +0000 UTC m=+7.827020110 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-trusted-ca-bundle") pod "apiserver-687747fbb4-k7dnf" (UID: "0c7b317c-d141-4e69-9c82-4a5dda6c3248") : failed to sync configmap cache: timed out waiting for the condition Mar 18 10:10:43.274443 master-0 kubenswrapper[30420]: E0318 10:10:43.274267 30420 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 18 10:10:43.274443 master-0 kubenswrapper[30420]: E0318 10:10:43.274318 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9cfd2323-c33a-4d80-9c25-710920c0e605-prometheus-operator-tls podName:9cfd2323-c33a-4d80-9c25-710920c0e605 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.774305522 +0000 UTC m=+7.827051581 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/9cfd2323-c33a-4d80-9c25-710920c0e605-prometheus-operator-tls") pod "prometheus-operator-6c8df6d4b-886k6" (UID: "9cfd2323-c33a-4d80-9c25-710920c0e605") : failed to sync secret cache: timed out waiting for the condition Mar 18 10:10:43.274443 master-0 kubenswrapper[30420]: E0318 10:10:43.274327 30420 secret.go:189] Couldn't get secret openshift-network-node-identity/network-node-identity-cert: failed to sync secret cache: timed out waiting for the condition Mar 18 10:10:43.274443 master-0 kubenswrapper[30420]: E0318 10:10:43.274334 30420 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 10:10:43.274443 master-0 kubenswrapper[30420]: E0318 10:10:43.274337 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-configmap-kubelet-serving-ca-bundle podName:106fc2a2-9e7b-4f86-94b8-b1a1906646d8 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.774326653 +0000 UTC m=+7.827072702 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-configmap-kubelet-serving-ca-bundle") pod "metrics-server-74c475bc87-xx98m" (UID: "106fc2a2-9e7b-4f86-94b8-b1a1906646d8") : failed to sync configmap cache: timed out waiting for the condition Mar 18 10:10:43.274443 master-0 kubenswrapper[30420]: E0318 10:10:43.274449 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb942756-bac7-414d-b179-cebdce588a13-webhook-cert podName:bb942756-bac7-414d-b179-cebdce588a13 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.774438575 +0000 UTC m=+7.827184614 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bb942756-bac7-414d-b179-cebdce588a13-webhook-cert") pod "network-node-identity-7fl4x" (UID: "bb942756-bac7-414d-b179-cebdce588a13") : failed to sync secret cache: timed out waiting for the condition Mar 18 10:10:43.275079 master-0 kubenswrapper[30420]: E0318 10:10:43.274468 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1084562a-20a0-432d-b739-90bc0a4daff2-cluster-baremetal-operator-tls podName:1084562a-20a0-432d-b739-90bc0a4daff2 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.774459776 +0000 UTC m=+7.827205825 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/1084562a-20a0-432d-b739-90bc0a4daff2-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-lnq7l" (UID: "1084562a-20a0-432d-b739-90bc0a4daff2") : failed to sync secret cache: timed out waiting for the condition Mar 18 10:10:43.275079 master-0 kubenswrapper[30420]: E0318 10:10:43.274926 30420 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 10:10:43.275079 master-0 kubenswrapper[30420]: E0318 10:10:43.275019 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af1bbeee-1faf-43d1-943f-ee5319cef4e9-openshift-state-metrics-tls podName:af1bbeee-1faf-43d1-943f-ee5319cef4e9 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.774997009 +0000 UTC m=+7.827743008 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/af1bbeee-1faf-43d1-943f-ee5319cef4e9-openshift-state-metrics-tls") pod "openshift-state-metrics-5dc6c74576-6rrn7" (UID: "af1bbeee-1faf-43d1-943f-ee5319cef4e9") : failed to sync secret cache: timed out waiting for the condition Mar 18 10:10:43.275373 master-0 kubenswrapper[30420]: E0318 10:10:43.275327 30420 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 18 10:10:43.275430 master-0 kubenswrapper[30420]: E0318 10:10:43.275417 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af1bbeee-1faf-43d1-943f-ee5319cef4e9-openshift-state-metrics-kube-rbac-proxy-config podName:af1bbeee-1faf-43d1-943f-ee5319cef4e9 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.775396009 +0000 UTC m=+7.828141998 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/af1bbeee-1faf-43d1-943f-ee5319cef4e9-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-5dc6c74576-6rrn7" (UID: "af1bbeee-1faf-43d1-943f-ee5319cef4e9") : failed to sync secret cache: timed out waiting for the condition Mar 18 10:10:43.276585 master-0 kubenswrapper[30420]: E0318 10:10:43.276413 30420 configmap.go:193] Couldn't get configMap openshift-apiserver/config: failed to sync configmap cache: timed out waiting for the condition Mar 18 10:10:43.276585 master-0 kubenswrapper[30420]: E0318 10:10:43.276487 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-config podName:0c7b317c-d141-4e69-9c82-4a5dda6c3248 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.776474277 +0000 UTC m=+7.829220286 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-config") pod "apiserver-687747fbb4-k7dnf" (UID: "0c7b317c-d141-4e69-9c82-4a5dda6c3248") : failed to sync configmap cache: timed out waiting for the condition Mar 18 10:10:43.276585 master-0 kubenswrapper[30420]: E0318 10:10:43.276527 30420 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: failed to sync configmap cache: timed out waiting for the condition Mar 18 10:10:43.276585 master-0 kubenswrapper[30420]: E0318 10:10:43.276569 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-telemeter-trusted-ca-bundle podName:aa4cba67-b5d4-46c2-8cad-1a1379f764cb nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.776560069 +0000 UTC m=+7.829305998 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-telemeter-trusted-ca-bundle") pod "telemeter-client-585cb8cdb6-g2jjm" (UID: "aa4cba67-b5d4-46c2-8cad-1a1379f764cb") : failed to sync configmap cache: timed out waiting for the condition Mar 18 10:10:43.276981 master-0 kubenswrapper[30420]: E0318 10:10:43.276744 30420 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 18 10:10:43.276981 master-0 kubenswrapper[30420]: E0318 10:10:43.276816 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71755097-7543-48f8-8925-0e21650bf8f6-trusted-ca-bundle podName:71755097-7543-48f8-8925-0e21650bf8f6 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.776800235 +0000 UTC m=+7.829546234 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/71755097-7543-48f8-8925-0e21650bf8f6-trusted-ca-bundle") pod "insights-operator-68bf6ff9d6-bdcw7" (UID: "71755097-7543-48f8-8925-0e21650bf8f6") : failed to sync configmap cache: timed out waiting for the condition Mar 18 10:10:43.276981 master-0 kubenswrapper[30420]: I0318 10:10:43.276833 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 18 10:10:43.278213 master-0 kubenswrapper[30420]: E0318 10:10:43.278015 30420 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Mar 18 10:10:43.278213 master-0 kubenswrapper[30420]: E0318 10:10:43.278056 30420 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 18 10:10:43.278213 master-0 kubenswrapper[30420]: E0318 10:10:43.278112 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-proxy-ca-bundles podName:9fc664ff-2e8f-441d-82dc-8f21c1d362d7 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.778083407 +0000 UTC m=+7.830829406 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-proxy-ca-bundles") pod "controller-manager-6c87d45bb4-vxcx9" (UID: "9fc664ff-2e8f-441d-82dc-8f21c1d362d7") : failed to sync configmap cache: timed out waiting for the condition Mar 18 10:10:43.278213 master-0 kubenswrapper[30420]: E0318 10:10:43.278114 30420 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 18 10:10:43.278213 master-0 kubenswrapper[30420]: E0318 10:10:43.278144 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71755097-7543-48f8-8925-0e21650bf8f6-serving-cert podName:71755097-7543-48f8-8925-0e21650bf8f6 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.778130708 +0000 UTC m=+7.830876737 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71755097-7543-48f8-8925-0e21650bf8f6-serving-cert") pod "insights-operator-68bf6ff9d6-bdcw7" (UID: "71755097-7543-48f8-8925-0e21650bf8f6") : failed to sync secret cache: timed out waiting for the condition Mar 18 10:10:43.279890 master-0 kubenswrapper[30420]: E0318 10:10:43.278155 30420 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: failed to sync configmap cache: timed out waiting for the condition Mar 18 10:10:43.279954 master-0 kubenswrapper[30420]: E0318 10:10:43.278198 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-serving-cert podName:8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.778180519 +0000 UTC m=+7.830926498 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-serving-cert") pod "route-controller-manager-5657df7dd8-4pp68" (UID: "8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d") : failed to sync secret cache: timed out waiting for the condition Mar 18 10:10:43.279954 master-0 kubenswrapper[30420]: E0318 10:10:43.279946 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9f5c64aa-676e-4e48-b714-02f6edb1d361-auth-proxy-config podName:9f5c64aa-676e-4e48-b714-02f6edb1d361 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.779931713 +0000 UTC m=+7.832677642 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/9f5c64aa-676e-4e48-b714-02f6edb1d361-auth-proxy-config") pod "cluster-autoscaler-operator-866dc4744-mw9tt" (UID: "9f5c64aa-676e-4e48-b714-02f6edb1d361") : failed to sync configmap cache: timed out waiting for the condition Mar 18 10:10:43.279954 master-0 kubenswrapper[30420]: E0318 10:10:43.279949 30420 configmap.go:193] Couldn't get configMap openshift-network-operator/iptables-alerter-script: failed to sync configmap cache: timed out waiting for the condition Mar 18 10:10:43.280057 master-0 kubenswrapper[30420]: E0318 10:10:43.278191 30420 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: failed to sync secret cache: timed out waiting for the condition Mar 18 10:10:43.280057 master-0 kubenswrapper[30420]: E0318 10:10:43.279247 30420 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 10:10:43.280057 master-0 kubenswrapper[30420]: E0318 10:10:43.279980 30420 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: failed to sync configmap cache: timed out waiting for the 
condition Mar 18 10:10:43.280057 master-0 kubenswrapper[30420]: E0318 10:10:43.280007 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29fbc78b-1887-40d4-8165-f0f7cc40b583-machine-api-operator-tls podName:29fbc78b-1887-40d4-8165-f0f7cc40b583 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.779994935 +0000 UTC m=+7.832740964 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/29fbc78b-1887-40d4-8165-f0f7cc40b583-machine-api-operator-tls") pod "machine-api-operator-6fbb6cf6f9-xnvn9" (UID: "29fbc78b-1887-40d4-8165-f0f7cc40b583") : failed to sync secret cache: timed out waiting for the condition Mar 18 10:10:43.280057 master-0 kubenswrapper[30420]: E0318 10:10:43.280025 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-custom-resource-state-configmap podName:5900a401-21c2-47f0-a921-47c648da558d nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.780016286 +0000 UTC m=+7.832762345 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7bbc969446-8tbkg" (UID: "5900a401-21c2-47f0-a921-47c648da558d") : failed to sync configmap cache: timed out waiting for the condition Mar 18 10:10:43.280057 master-0 kubenswrapper[30420]: E0318 10:10:43.279259 30420 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 10:10:43.280057 master-0 kubenswrapper[30420]: E0318 10:10:43.280038 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62b82d72-d73c-451a-84e1-551d73036aa8-iptables-alerter-script podName:62b82d72-d73c-451a-84e1-551d73036aa8 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.780031606 +0000 UTC m=+7.832777535 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "iptables-alerter-script" (UniqueName: "kubernetes.io/configmap/62b82d72-d73c-451a-84e1-551d73036aa8-iptables-alerter-script") pod "iptables-alerter-r7h65" (UID: "62b82d72-d73c-451a-84e1-551d73036aa8") : failed to sync configmap cache: timed out waiting for the condition Mar 18 10:10:43.280057 master-0 kubenswrapper[30420]: E0318 10:10:43.279269 30420 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: failed to sync secret cache: timed out waiting for the condition Mar 18 10:10:43.280057 master-0 kubenswrapper[30420]: E0318 10:10:43.280050 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1084562a-20a0-432d-b739-90bc0a4daff2-cert podName:1084562a-20a0-432d-b739-90bc0a4daff2 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.780044476 +0000 UTC m=+7.832790405 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1084562a-20a0-432d-b739-90bc0a4daff2-cert") pod "cluster-baremetal-operator-6f69995874-lnq7l" (UID: "1084562a-20a0-432d-b739-90bc0a4daff2") : failed to sync secret cache: timed out waiting for the condition Mar 18 10:10:43.280057 master-0 kubenswrapper[30420]: E0318 10:10:43.280065 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-telemeter-client-tls podName:aa4cba67-b5d4-46c2-8cad-1a1379f764cb nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.780060887 +0000 UTC m=+7.832806806 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-telemeter-client-tls") pod "telemeter-client-585cb8cdb6-g2jjm" (UID: "aa4cba67-b5d4-46c2-8cad-1a1379f764cb") : failed to sync secret cache: timed out waiting for the condition Mar 18 10:10:43.280398 master-0 kubenswrapper[30420]: E0318 10:10:43.280076 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-federate-client-tls podName:aa4cba67-b5d4-46c2-8cad-1a1379f764cb nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.780071417 +0000 UTC m=+7.832817346 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-federate-client-tls") pod "telemeter-client-585cb8cdb6-g2jjm" (UID: "aa4cba67-b5d4-46c2-8cad-1a1379f764cb") : failed to sync secret cache: timed out waiting for the condition Mar 18 10:10:43.280398 master-0 kubenswrapper[30420]: E0318 10:10:43.279284 30420 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Mar 18 10:10:43.280398 master-0 kubenswrapper[30420]: E0318 10:10:43.280108 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bdf80ddc-7c99-4f60-814b-ba98809ef41d-webhook-cert podName:bdf80ddc-7c99-4f60-814b-ba98809ef41d nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.780099258 +0000 UTC m=+7.832845317 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bdf80ddc-7c99-4f60-814b-ba98809ef41d-webhook-cert") pod "packageserver-7b64dcc66c-2vx58" (UID: "bdf80ddc-7c99-4f60-814b-ba98809ef41d") : failed to sync secret cache: timed out waiting for the condition Mar 18 10:10:43.282183 master-0 kubenswrapper[30420]: E0318 10:10:43.281969 30420 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 18 10:10:43.282183 master-0 kubenswrapper[30420]: E0318 10:10:43.282032 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5900a401-21c2-47f0-a921-47c648da558d-metrics-client-ca podName:5900a401-21c2-47f0-a921-47c648da558d nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.782014346 +0000 UTC m=+7.834760345 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/5900a401-21c2-47f0-a921-47c648da558d-metrics-client-ca") pod "kube-state-metrics-7bbc969446-8tbkg" (UID: "5900a401-21c2-47f0-a921-47c648da558d") : failed to sync configmap cache: timed out waiting for the condition Mar 18 10:10:43.282183 master-0 kubenswrapper[30420]: E0318 10:10:43.282073 30420 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 18 10:10:43.282183 master-0 kubenswrapper[30420]: E0318 10:10:43.282103 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71755097-7543-48f8-8925-0e21650bf8f6-service-ca-bundle podName:71755097-7543-48f8-8925-0e21650bf8f6 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.782094568 +0000 UTC m=+7.834840607 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/71755097-7543-48f8-8925-0e21650bf8f6-service-ca-bundle") pod "insights-operator-68bf6ff9d6-bdcw7" (UID: "71755097-7543-48f8-8925-0e21650bf8f6") : failed to sync configmap cache: timed out waiting for the condition Mar 18 10:10:43.282183 master-0 kubenswrapper[30420]: E0318 10:10:43.282133 30420 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition Mar 18 10:10:43.282183 master-0 kubenswrapper[30420]: E0318 10:10:43.282176 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-images podName:8641c1d1-dd79-4f1f-9343-52d1ee6faf9f nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.78216776 +0000 UTC m=+7.834913689 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-images") pod "cluster-cloud-controller-manager-operator-7dff898856-rpxn4" (UID: "8641c1d1-dd79-4f1f-9343-52d1ee6faf9f") : failed to sync configmap cache: timed out waiting for the condition Mar 18 10:10:43.282406 master-0 kubenswrapper[30420]: E0318 10:10:43.282210 30420 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 18 10:10:43.282406 master-0 kubenswrapper[30420]: E0318 10:10:43.282258 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9cfd2323-c33a-4d80-9c25-710920c0e605-metrics-client-ca podName:9cfd2323-c33a-4d80-9c25-710920c0e605 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.782247362 +0000 UTC m=+7.834993351 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/9cfd2323-c33a-4d80-9c25-710920c0e605-metrics-client-ca") pod "prometheus-operator-6c8df6d4b-886k6" (UID: "9cfd2323-c33a-4d80-9c25-710920c0e605") : failed to sync configmap cache: timed out waiting for the condition Mar 18 10:10:43.283355 master-0 kubenswrapper[30420]: E0318 10:10:43.283259 30420 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 18 10:10:43.283355 master-0 kubenswrapper[30420]: E0318 10:10:43.283283 30420 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 10:10:43.283355 master-0 kubenswrapper[30420]: E0318 10:10:43.283316 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29490aed-9c97-42d1-94c8-44d1de13b70c-cluster-storage-operator-serving-cert 
podName:29490aed-9c97-42d1-94c8-44d1de13b70c nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.783303628 +0000 UTC m=+7.836049647 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/29490aed-9c97-42d1-94c8-44d1de13b70c-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-7d87854d6-4kr54" (UID: "29490aed-9c97-42d1-94c8-44d1de13b70c") : failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.283355 master-0 kubenswrapper[30420]: E0318 10:10:43.283306 30420 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.283355 master-0 kubenswrapper[30420]: E0318 10:10:43.283339 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-tls podName:1cb8ab19-0564-4182-a7e3-0943c1480663 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.783329169 +0000 UTC m=+7.836075218 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-tls") pod "node-exporter-l9q9t" (UID: "1cb8ab19-0564-4182-a7e3-0943c1480663") : failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.283534 master-0 kubenswrapper[30420]: E0318 10:10:43.283354 30420 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.283534 master-0 kubenswrapper[30420]: E0318 10:10:43.283378 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-secret-metrics-server-tls podName:106fc2a2-9e7b-4f86-94b8-b1a1906646d8 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.78336454 +0000 UTC m=+7.836110539 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-secret-metrics-server-tls") pod "metrics-server-74c475bc87-xx98m" (UID: "106fc2a2-9e7b-4f86-94b8-b1a1906646d8") : failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.283534 master-0 kubenswrapper[30420]: E0318 10:10:43.283378 30420 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.283534 master-0 kubenswrapper[30420]: E0318 10:10:43.283330 30420 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.283534 master-0 kubenswrapper[30420]: E0318 10:10:43.283407 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-config podName:9fc664ff-2e8f-441d-82dc-8f21c1d362d7 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.78339258 +0000 UTC m=+7.836138599 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-config") pod "controller-manager-6c87d45bb4-vxcx9" (UID: "9fc664ff-2e8f-441d-82dc-8f21c1d362d7") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.283534 master-0 kubenswrapper[30420]: E0318 10:10:43.283425 30420 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.283534 master-0 kubenswrapper[30420]: E0318 10:10:43.283367 30420 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.283534 master-0 kubenswrapper[30420]: E0318 10:10:43.283431 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce65f61f-8e3a-47d5-ac12-ad4ab05d2850-samples-operator-tls podName:ce65f61f-8e3a-47d5-ac12-ad4ab05d2850 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.783419891 +0000 UTC m=+7.836165920 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/ce65f61f-8e3a-47d5-ac12-ad4ab05d2850-samples-operator-tls") pod "cluster-samples-operator-85f7577d78-5dg2r" (UID: "ce65f61f-8e3a-47d5-ac12-ad4ab05d2850") : failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.283534 master-0 kubenswrapper[30420]: E0318 10:10:43.283455 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f5c64aa-676e-4e48-b714-02f6edb1d361-cert podName:9f5c64aa-676e-4e48-b714-02f6edb1d361 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.783446472 +0000 UTC m=+7.836192521 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9f5c64aa-676e-4e48-b714-02f6edb1d361-cert") pod "cluster-autoscaler-operator-866dc4744-mw9tt" (UID: "9f5c64aa-676e-4e48-b714-02f6edb1d361") : failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.283534 master-0 kubenswrapper[30420]: E0318 10:10:43.283468 30420 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.283534 master-0 kubenswrapper[30420]: E0318 10:10:43.283478 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-kube-rbac-proxy-config podName:5900a401-21c2-47f0-a921-47c648da558d nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.783466452 +0000 UTC m=+7.836212501 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7bbc969446-8tbkg" (UID: "5900a401-21c2-47f0-a921-47c648da558d") : failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.283534 master-0 kubenswrapper[30420]: E0318 10:10:43.283494 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-secret-metrics-client-certs podName:106fc2a2-9e7b-4f86-94b8-b1a1906646d8 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.783487923 +0000 UTC m=+7.836233962 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-secret-metrics-client-certs") pod "metrics-server-74c475bc87-xx98m" (UID: "106fc2a2-9e7b-4f86-94b8-b1a1906646d8") : failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.283534 master-0 kubenswrapper[30420]: E0318 10:10:43.283512 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f88c2a18-11f5-45ef-aff1-3c5976716d85-control-plane-machine-set-operator-tls podName:f88c2a18-11f5-45ef-aff1-3c5976716d85 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.783503623 +0000 UTC m=+7.836249652 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/f88c2a18-11f5-45ef-aff1-3c5976716d85-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-6f97756bc8-zcm5j" (UID: "f88c2a18-11f5-45ef-aff1-3c5976716d85") : failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.284772 master-0 kubenswrapper[30420]: E0318 10:10:43.284583 30420 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.284772 master-0 kubenswrapper[30420]: E0318 10:10:43.284590 30420 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.284772 master-0 kubenswrapper[30420]: E0318 10:10:43.284633 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cb8ab19-0564-4182-a7e3-0943c1480663-metrics-client-ca podName:1cb8ab19-0564-4182-a7e3-0943c1480663 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.784623971 +0000 UTC m=+7.837369900 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/1cb8ab19-0564-4182-a7e3-0943c1480663-metrics-client-ca") pod "node-exporter-l9q9t" (UID: "1cb8ab19-0564-4182-a7e3-0943c1480663") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.284772 master-0 kubenswrapper[30420]: E0318 10:10:43.284635 30420 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.284772 master-0 kubenswrapper[30420]: E0318 10:10:43.284648 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-tls podName:5900a401-21c2-47f0-a921-47c648da558d nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.784641902 +0000 UTC m=+7.837387831 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-tls") pod "kube-state-metrics-7bbc969446-8tbkg" (UID: "5900a401-21c2-47f0-a921-47c648da558d") : failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.284772 master-0 kubenswrapper[30420]: E0318 10:10:43.284656 30420 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.284772 master-0 kubenswrapper[30420]: E0318 10:10:43.284675 30420 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.284772 master-0 kubenswrapper[30420]: E0318 10:10:43.284646 30420 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.284772 master-0 kubenswrapper[30420]: E0318 10:10:43.284692 30420 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.284772 master-0 kubenswrapper[30420]: E0318 10:10:43.284683 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af1bbeee-1faf-43d1-943f-ee5319cef4e9-metrics-client-ca podName:af1bbeee-1faf-43d1-943f-ee5319cef4e9 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.784674113 +0000 UTC m=+7.837420042 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/af1bbeee-1faf-43d1-943f-ee5319cef4e9-metrics-client-ca") pod "openshift-state-metrics-5dc6c74576-6rrn7" (UID: "af1bbeee-1faf-43d1-943f-ee5319cef4e9") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.284772 master-0 kubenswrapper[30420]: E0318 10:10:43.284725 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c7b317c-d141-4e69-9c82-4a5dda6c3248-serving-cert podName:0c7b317c-d141-4e69-9c82-4a5dda6c3248 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.784719094 +0000 UTC m=+7.837465023 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0c7b317c-d141-4e69-9c82-4a5dda6c3248-serving-cert") pod "apiserver-687747fbb4-k7dnf" (UID: "0c7b317c-d141-4e69-9c82-4a5dda6c3248") : failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.284772 master-0 kubenswrapper[30420]: E0318 10:10:43.284737 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-etcd-serving-ca podName:0c7b317c-d141-4e69-9c82-4a5dda6c3248 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.784731624 +0000 UTC m=+7.837477543 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-etcd-serving-ca") pod "apiserver-687747fbb4-k7dnf" (UID: "0c7b317c-d141-4e69-9c82-4a5dda6c3248") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.284772 master-0 kubenswrapper[30420]: E0318 10:10:43.284747 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74476be5-669a-4737-b93b-c4870423a4da-cert podName:74476be5-669a-4737-b93b-c4870423a4da nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.784742774 +0000 UTC m=+7.837488703 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/74476be5-669a-4737-b93b-c4870423a4da-cert") pod "ingress-canary-rzksb" (UID: "74476be5-669a-4737-b93b-c4870423a4da") : failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.284772 master-0 kubenswrapper[30420]: E0318 10:10:43.284760 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-secret-telemeter-client podName:aa4cba67-b5d4-46c2-8cad-1a1379f764cb nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.784754685 +0000 UTC m=+7.837500614 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-secret-telemeter-client") pod "telemeter-client-585cb8cdb6-g2jjm" (UID: "aa4cba67-b5d4-46c2-8cad-1a1379f764cb") : failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.285989 master-0 kubenswrapper[30420]: E0318 10:10:43.285875 30420 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.285989 master-0 kubenswrapper[30420]: E0318 10:10:43.285928 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-secret-telemeter-client-kube-rbac-proxy-config podName:aa4cba67-b5d4-46c2-8cad-1a1379f764cb nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.785913914 +0000 UTC m=+7.838659953 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-585cb8cdb6-g2jjm" (UID: "aa4cba67-b5d4-46c2-8cad-1a1379f764cb") : failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.285989 master-0 kubenswrapper[30420]: E0318 10:10:43.285931 30420 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.285989 master-0 kubenswrapper[30420]: E0318 10:10:43.285933 30420 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.285989 master-0 kubenswrapper[30420]: E0318 10:10:43.285968 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bdf80ddc-7c99-4f60-814b-ba98809ef41d-apiservice-cert podName:bdf80ddc-7c99-4f60-814b-ba98809ef41d nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.785959945 +0000 UTC m=+7.838705864 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bdf80ddc-7c99-4f60-814b-ba98809ef41d-apiservice-cert") pod "packageserver-7b64dcc66c-2vx58" (UID: "bdf80ddc-7c99-4f60-814b-ba98809ef41d") : failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:43.285989 master-0 kubenswrapper[30420]: E0318 10:10:43.285976 30420 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.285989 master-0 kubenswrapper[30420]: E0318 10:10:43.285980 30420 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.286194 master-0 kubenswrapper[30420]: E0318 10:10:43.285990 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-metrics-client-ca podName:aa4cba67-b5d4-46c2-8cad-1a1379f764cb nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.785976505 +0000 UTC m=+7.838722514 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-metrics-client-ca") pod "telemeter-client-585cb8cdb6-g2jjm" (UID: "aa4cba67-b5d4-46c2-8cad-1a1379f764cb") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.286194 master-0 kubenswrapper[30420]: E0318 10:10:43.286015 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-auth-proxy-config podName:8641c1d1-dd79-4f1f-9343-52d1ee6faf9f nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.786005816 +0000 UTC m=+7.838751855 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-7dff898856-rpxn4" (UID: "8641c1d1-dd79-4f1f-9343-52d1ee6faf9f") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.286194 master-0 kubenswrapper[30420]: E0318 10:10:43.286032 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-serving-certs-ca-bundle podName:aa4cba67-b5d4-46c2-8cad-1a1379f764cb nodeName:}" failed. No retries permitted until 2026-03-18 10:10:43.786023936 +0000 UTC m=+7.838769995 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-serving-certs-ca-bundle") pod "telemeter-client-585cb8cdb6-g2jjm" (UID: "aa4cba67-b5d4-46c2-8cad-1a1379f764cb") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:43.297441 master-0 kubenswrapper[30420]: I0318 10:10:43.297296 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Mar 18 10:10:43.316169 master-0 kubenswrapper[30420]: I0318 10:10:43.315985 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-qcchq"
Mar 18 10:10:43.336009 master-0 kubenswrapper[30420]: I0318 10:10:43.335848 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 18 10:10:43.356087 master-0 kubenswrapper[30420]: I0318 10:10:43.356034 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 18 10:10:43.375666 master-0 kubenswrapper[30420]: I0318 10:10:43.375595 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 18 10:10:43.395803 master-0 kubenswrapper[30420]: I0318 10:10:43.395757 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-p9m8v"
Mar 18 10:10:43.416900 master-0 kubenswrapper[30420]: I0318 10:10:43.416738 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Mar 18 10:10:43.441383 master-0 kubenswrapper[30420]: I0318 10:10:43.441324 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Mar 18 10:10:43.458161 master-0 kubenswrapper[30420]: I0318 10:10:43.457028 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Mar 18 10:10:43.478317 master-0 kubenswrapper[30420]: I0318 10:10:43.478240 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Mar 18 10:10:43.496564 master-0 kubenswrapper[30420]: I0318 10:10:43.496500 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 18 10:10:43.517034 master-0 kubenswrapper[30420]: I0318 10:10:43.516955 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-t5rvh"
Mar 18 10:10:43.536955 master-0 kubenswrapper[30420]: I0318 10:10:43.536579 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Mar 18 10:10:43.556881 master-0 kubenswrapper[30420]: I0318 10:10:43.555683 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-vxxzb"
Mar 18 10:10:43.576721 master-0 kubenswrapper[30420]: I0318 10:10:43.576646 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 18 10:10:43.596430 master-0 kubenswrapper[30420]: I0318 10:10:43.596368 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-wv86q"
Mar 18 10:10:43.616516 master-0 kubenswrapper[30420]: I0318 10:10:43.616434 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 18 10:10:43.636078 master-0 kubenswrapper[30420]: I0318 10:10:43.636022 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-8lfl6"
Mar 18 10:10:43.656109 master-0 kubenswrapper[30420]: I0318 10:10:43.656039 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 18 10:10:43.676301 master-0 kubenswrapper[30420]: I0318 10:10:43.676194 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Mar 18 10:10:43.696867 master-0 kubenswrapper[30420]: I0318 10:10:43.696796 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-glndn"
Mar 18 10:10:43.705967 master-0 kubenswrapper[30420]: I0318 10:10:43.705906 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_7d5ce05b3d592e63f1f92202d52b9635/kube-apiserver-check-endpoints/0.log"
Mar 18 10:10:43.709733 master-0 kubenswrapper[30420]: I0318 10:10:43.709670 30420 generic.go:334] "Generic (PLEG): container finished" podID="7d5ce05b3d592e63f1f92202d52b9635" containerID="ceb0752eea3da310ec4f97706cc49b9e5802cdc6a08264ab2c0725b45c7967d0" exitCode=255
Mar 18 10:10:43.712747 master-0 kubenswrapper[30420]: I0318 10:10:43.712658 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-retry-1-master-0"
Mar 18 10:10:43.716589 master-0 kubenswrapper[30420]: I0318 10:10:43.716536 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Mar 18 10:10:43.740169 master-0 kubenswrapper[30420]: I0318 10:10:43.739288 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Mar 18 10:10:43.761127 master-0 kubenswrapper[30420]: I0318 10:10:43.761052 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 18 10:10:43.776136 master-0 kubenswrapper[30420]: I0318 10:10:43.776086 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 18 10:10:43.796487 master-0 kubenswrapper[30420]: I0318 10:10:43.796435 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 18 10:10:43.816323 master-0 kubenswrapper[30420]: I0318 10:10:43.816262 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 18 10:10:43.835781 master-0 kubenswrapper[30420]: I0318 10:10:43.835736 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-8wt5h"
Mar 18 10:10:43.836148 master-0 kubenswrapper[30420]: I0318 10:10:43.836114 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/196e7607-1ddf-467b-9901-b4be746130a1-node-bootstrap-token\") pod \"machine-config-server-9wnkm\" (UID: \"196e7607-1ddf-467b-9901-b4be746130a1\") " pod="openshift-machine-config-operator/machine-config-server-9wnkm"
Mar 18 10:10:43.836226 master-0 kubenswrapper[30420]: I0318 10:10:43.836163 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1084562a-20a0-432d-b739-90bc0a4daff2-images\") pod \"cluster-baremetal-operator-6f69995874-lnq7l\" (UID: \"1084562a-20a0-432d-b739-90bc0a4daff2\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l"
Mar 18 10:10:43.836226 master-0 kubenswrapper[30420]: I0318 10:10:43.836189 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-audit\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf"
Mar 18 10:10:43.836226 master-0 kubenswrapper[30420]: I0318 10:10:43.836208 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0c7b317c-d141-4e69-9c82-4a5dda6c3248-etcd-client\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf"
Mar 18 10:10:43.836226 master-0 kubenswrapper[30420]: I0318 10:10:43.836227 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/196e7607-1ddf-467b-9901-b4be746130a1-certs\") pod \"machine-config-server-9wnkm\" (UID: \"196e7607-1ddf-467b-9901-b4be746130a1\") " pod="openshift-machine-config-operator/machine-config-server-9wnkm"
Mar 18 10:10:43.836418 master-0 kubenswrapper[30420]: I0318 10:10:43.836254 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-image-import-ca\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf"
Mar 18 10:10:43.836418 master-0 kubenswrapper[30420]: I0318 10:10:43.836284 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-metrics-server-audit-profiles\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m"
Mar 18 10:10:43.836504 master-0 kubenswrapper[30420]: I0318 10:10:43.836444 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e5e0836f-c0b4-40cd-9f63-55774da2740e-proxy-tls\") pod \"machine-config-daemon-mtdk2\" (UID: \"e5e0836f-c0b4-40cd-9f63-55774da2740e\") " pod="openshift-machine-config-operator/machine-config-daemon-mtdk2"
Mar 18 10:10:43.836504 master-0 kubenswrapper[30420]: I0318 10:10:43.836494 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2d014721-ed53-447a-b737-c496bbba18be-proxy-tls\") pod \"machine-config-operator-84d549f6d5-gnl5t\" (UID: \"2d014721-ed53-447a-b737-c496bbba18be\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t"
Mar 18 10:10:43.836705 master-0 kubenswrapper[30420]: I0318 10:10:43.836663 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f875878f-3588-42f1-9488-750d9f4582f8-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-ssnvh\" (UID: \"f875878f-3588-42f1-9488-750d9f4582f8\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-ssnvh"
Mar 18 10:10:43.836771 master-0 kubenswrapper[30420]: I0318 10:10:43.836752 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29fbc78b-1887-40d4-8165-f0f7cc40b583-config\") pod \"machine-api-operator-6fbb6cf6f9-xnvn9\" (UID: \"29fbc78b-1887-40d4-8165-f0f7cc40b583\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-xnvn9"
Mar 18 10:10:43.836818 master-0 kubenswrapper[30420]: I0318 10:10:43.836805 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2d014721-ed53-447a-b737-c496bbba18be-proxy-tls\") pod \"machine-config-operator-84d549f6d5-gnl5t\" (UID: \"2d014721-ed53-447a-b737-c496bbba18be\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t"
Mar 18 10:10:43.837036 master-0 kubenswrapper[30420]: I0318 10:10:43.837012 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29fbc78b-1887-40d4-8165-f0f7cc40b583-config\") pod \"machine-api-operator-6fbb6cf6f9-xnvn9\" (UID: \"29fbc78b-1887-40d4-8165-f0f7cc40b583\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-xnvn9"
Mar 18 10:10:43.837109 master-0 kubenswrapper[30420]: I0318 10:10:43.837046 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e5e0836f-c0b4-40cd-9f63-55774da2740e-proxy-tls\") pod \"machine-config-daemon-mtdk2\" (UID: \"e5e0836f-c0b4-40cd-9f63-55774da2740e\") " pod="openshift-machine-config-operator/machine-config-daemon-mtdk2"
Mar 18 10:10:43.837155 master-0 kubenswrapper[30420]: I0318 10:10:43.837073 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-rpxn4\" (UID: \"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4"
Mar 18 10:10:43.837281 master-0 kubenswrapper[30420]: I0318 10:10:43.837247 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-client-ca\") pod \"route-controller-manager-5657df7dd8-4pp68\" (UID: \"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d\") " pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68"
Mar 18 10:10:43.837338 master-0 kubenswrapper[30420]: I0318 10:10:43.837305 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t"
Mar 18 10:10:43.837384 master-0 kubenswrapper[30420]: I0318 10:10:43.837371 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/bb942756-bac7-414d-b179-cebdce588a13-ovnkube-identity-cm\") pod \"network-node-identity-7fl4x\" (UID: \"bb942756-bac7-414d-b179-cebdce588a13\") " pod="openshift-network-node-identity/network-node-identity-7fl4x"
Mar 18 10:10:43.837424 master-0 kubenswrapper[30420]: I0318 10:10:43.837399 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1084562a-20a0-432d-b739-90bc0a4daff2-images\") pod \"cluster-baremetal-operator-6f69995874-lnq7l\" (UID: \"1084562a-20a0-432d-b739-90bc0a4daff2\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l"
Mar 18 10:10:43.837424 master-0 kubenswrapper[30420]: I0318 10:10:43.837411 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-serving-cert\") pod \"controller-manager-6c87d45bb4-vxcx9\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") " pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9"
Mar 18 10:10:43.837556 master-0 kubenswrapper[30420]: I0318 10:10:43.837518 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0c7b317c-d141-4e69-9c82-4a5dda6c3248-encryption-config\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf"
Mar 18 10:10:43.837634 master-0 kubenswrapper[30420]: I0318 10:10:43.837608 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-config\") pod \"route-controller-manager-5657df7dd8-4pp68\" (UID: \"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d\") " pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68"
Mar 18 10:10:43.837789 master-0 kubenswrapper[30420]: I0318 10:10:43.837767 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/29fbc78b-1887-40d4-8165-f0f7cc40b583-images\") pod \"machine-api-operator-6fbb6cf6f9-xnvn9\" (UID: \"29fbc78b-1887-40d4-8165-f0f7cc40b583\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-xnvn9"
Mar 18 10:10:43.837929 master-0 kubenswrapper[30420]: I0318 10:10:43.837895 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1084562a-20a0-432d-b739-90bc0a4daff2-config\") pod \"cluster-baremetal-operator-6f69995874-lnq7l\" (UID: \"1084562a-20a0-432d-b739-90bc0a4daff2\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l"
Mar 18 10:10:43.837983 master-0 kubenswrapper[30420]: I0318 10:10:43.837961 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bb942756-bac7-414d-b179-cebdce588a13-env-overrides\") pod \"network-node-identity-7fl4x\" (UID: \"bb942756-bac7-414d-b179-cebdce588a13\") " pod="openshift-network-node-identity/network-node-identity-7fl4x"
Mar 18 10:10:43.838025 master-0 kubenswrapper[30420]: I0318 10:10:43.837970 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/29fbc78b-1887-40d4-8165-f0f7cc40b583-images\") pod \"machine-api-operator-6fbb6cf6f9-xnvn9\" (UID: \"29fbc78b-1887-40d4-8165-f0f7cc40b583\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-xnvn9"
Mar 18 10:10:43.838130 master-0 kubenswrapper[30420]: I0318 10:10:43.838105 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/1084562a-20a0-432d-b739-90bc0a4daff2-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-lnq7l\" (UID: \"1084562a-20a0-432d-b739-90bc0a4daff2\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l"
Mar 18 10:10:43.838188 master-0 kubenswrapper[30420]: I0318 10:10:43.838144 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bb942756-bac7-414d-b179-cebdce588a13-webhook-cert\") pod \"network-node-identity-7fl4x\" (UID: \"bb942756-bac7-414d-b179-cebdce588a13\") " pod="openshift-network-node-identity/network-node-identity-7fl4x"
Mar 18 10:10:43.838358 master-0 kubenswrapper[30420]: I0318 10:10:43.838313 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-trusted-ca-bundle\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf"
Mar 18 10:10:43.838417 master-0 kubenswrapper[30420]: I0318 10:10:43.838373 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/9cfd2323-c33a-4d80-9c25-710920c0e605-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-886k6\" (UID: \"9cfd2323-c33a-4d80-9c25-710920c0e605\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-886k6"
Mar 18 10:10:43.838460 master-0 kubenswrapper[30420]: I0318 10:10:43.838424 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m"
Mar 18 10:10:43.838505 master-0 kubenswrapper[30420]: I0318 10:10:43.838457 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/af1bbeee-1faf-43d1-943f-ee5319cef4e9-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-6rrn7\" (UID: \"af1bbeee-1faf-43d1-943f-ee5319cef4e9\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-6rrn7"
Mar 18 10:10:43.838550 master-0 kubenswrapper[30420]: I0318 10:10:43.838508 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/af1bbeee-1faf-43d1-943f-ee5319cef4e9-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-6rrn7\" (UID: \"af1bbeee-1faf-43d1-943f-ee5319cef4e9\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-6rrn7"
Mar 18 10:10:43.838599 master-0 kubenswrapper[30420]: I0318 10:10:43.838554 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71755097-7543-48f8-8925-0e21650bf8f6-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-bdcw7\" (UID: \"71755097-7543-48f8-8925-0e21650bf8f6\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bdcw7" Mar 18 10:10:43.838649 master-0 kubenswrapper[30420]: I0318 10:10:43.838582 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/1084562a-20a0-432d-b739-90bc0a4daff2-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-lnq7l\" (UID: \"1084562a-20a0-432d-b739-90bc0a4daff2\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l" Mar 18 10:10:43.838775 master-0 kubenswrapper[30420]: I0318 10:10:43.838727 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-telemeter-trusted-ca-bundle\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm" Mar 18 10:10:43.838848 master-0 kubenswrapper[30420]: I0318 10:10:43.838779 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-config\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 10:10:43.838892 master-0 kubenswrapper[30420]: I0318 10:10:43.838855 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/71755097-7543-48f8-8925-0e21650bf8f6-serving-cert\") pod \"insights-operator-68bf6ff9d6-bdcw7\" (UID: \"71755097-7543-48f8-8925-0e21650bf8f6\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bdcw7" Mar 18 10:10:43.839035 master-0 kubenswrapper[30420]: I0318 10:10:43.838899 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-proxy-ca-bundles\") pod \"controller-manager-6c87d45bb4-vxcx9\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") " pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" Mar 18 10:10:43.839035 master-0 kubenswrapper[30420]: I0318 10:10:43.838924 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-serving-cert\") pod \"route-controller-manager-5657df7dd8-4pp68\" (UID: \"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d\") " pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" Mar 18 10:10:43.839035 master-0 kubenswrapper[30420]: I0318 10:10:43.838980 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9f5c64aa-676e-4e48-b714-02f6edb1d361-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-mw9tt\" (UID: \"9f5c64aa-676e-4e48-b714-02f6edb1d361\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-mw9tt" Mar 18 10:10:43.839035 master-0 kubenswrapper[30420]: I0318 10:10:43.839002 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1084562a-20a0-432d-b739-90bc0a4daff2-cert\") pod \"cluster-baremetal-operator-6f69995874-lnq7l\" (UID: \"1084562a-20a0-432d-b739-90bc0a4daff2\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l" Mar 18 
10:10:43.839035 master-0 kubenswrapper[30420]: I0318 10:10:43.839023 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/29fbc78b-1887-40d4-8165-f0f7cc40b583-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-xnvn9\" (UID: \"29fbc78b-1887-40d4-8165-f0f7cc40b583\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-xnvn9" Mar 18 10:10:43.839216 master-0 kubenswrapper[30420]: I0318 10:10:43.839050 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-telemeter-client-tls\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm" Mar 18 10:10:43.839216 master-0 kubenswrapper[30420]: I0318 10:10:43.839069 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-federate-client-tls\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm" Mar 18 10:10:43.839216 master-0 kubenswrapper[30420]: I0318 10:10:43.839098 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/62b82d72-d73c-451a-84e1-551d73036aa8-iptables-alerter-script\") pod \"iptables-alerter-r7h65\" (UID: \"62b82d72-d73c-451a-84e1-551d73036aa8\") " pod="openshift-network-operator/iptables-alerter-r7h65" Mar 18 10:10:43.839216 master-0 kubenswrapper[30420]: I0318 10:10:43.839118 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: 
\"kubernetes.io/configmap/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" Mar 18 10:10:43.839216 master-0 kubenswrapper[30420]: I0318 10:10:43.839122 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71755097-7543-48f8-8925-0e21650bf8f6-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-bdcw7\" (UID: \"71755097-7543-48f8-8925-0e21650bf8f6\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bdcw7" Mar 18 10:10:43.839216 master-0 kubenswrapper[30420]: I0318 10:10:43.839144 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bdf80ddc-7c99-4f60-814b-ba98809ef41d-webhook-cert\") pod \"packageserver-7b64dcc66c-2vx58\" (UID: \"bdf80ddc-7c99-4f60-814b-ba98809ef41d\") " pod="openshift-operator-lifecycle-manager/packageserver-7b64dcc66c-2vx58" Mar 18 10:10:43.839430 master-0 kubenswrapper[30420]: I0318 10:10:43.839223 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5900a401-21c2-47f0-a921-47c648da558d-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" Mar 18 10:10:43.839430 master-0 kubenswrapper[30420]: I0318 10:10:43.839256 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-tls\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 10:10:43.839430 master-0 
kubenswrapper[30420]: I0318 10:10:43.839306 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71755097-7543-48f8-8925-0e21650bf8f6-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-bdcw7\" (UID: \"71755097-7543-48f8-8925-0e21650bf8f6\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bdcw7" Mar 18 10:10:43.839430 master-0 kubenswrapper[30420]: I0318 10:10:43.839330 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bdf80ddc-7c99-4f60-814b-ba98809ef41d-webhook-cert\") pod \"packageserver-7b64dcc66c-2vx58\" (UID: \"bdf80ddc-7c99-4f60-814b-ba98809ef41d\") " pod="openshift-operator-lifecycle-manager/packageserver-7b64dcc66c-2vx58" Mar 18 10:10:43.839430 master-0 kubenswrapper[30420]: I0318 10:10:43.839369 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-secret-metrics-server-tls\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 10:10:43.839430 master-0 kubenswrapper[30420]: I0318 10:10:43.839411 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9cfd2323-c33a-4d80-9c25-710920c0e605-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-886k6\" (UID: \"9cfd2323-c33a-4d80-9c25-710920c0e605\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-886k6" Mar 18 10:10:43.839642 master-0 kubenswrapper[30420]: I0318 10:10:43.839479 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9f5c64aa-676e-4e48-b714-02f6edb1d361-auth-proxy-config\") pod 
\"cluster-autoscaler-operator-866dc4744-mw9tt\" (UID: \"9f5c64aa-676e-4e48-b714-02f6edb1d361\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-mw9tt" Mar 18 10:10:43.839642 master-0 kubenswrapper[30420]: I0318 10:10:43.839490 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-rpxn4\" (UID: \"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4" Mar 18 10:10:43.839642 master-0 kubenswrapper[30420]: I0318 10:10:43.839564 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ce65f61f-8e3a-47d5-ac12-ad4ab05d2850-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-5dg2r\" (UID: \"ce65f61f-8e3a-47d5-ac12-ad4ab05d2850\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-5dg2r" Mar 18 10:10:43.839642 master-0 kubenswrapper[30420]: I0318 10:10:43.839622 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-secret-metrics-client-certs\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 10:10:43.839789 master-0 kubenswrapper[30420]: I0318 10:10:43.839671 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/29fbc78b-1887-40d4-8165-f0f7cc40b583-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-xnvn9\" (UID: \"29fbc78b-1887-40d4-8165-f0f7cc40b583\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-xnvn9" 
Mar 18 10:10:43.839789 master-0 kubenswrapper[30420]: I0318 10:10:43.839689 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" Mar 18 10:10:43.839789 master-0 kubenswrapper[30420]: I0318 10:10:43.839717 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/29490aed-9c97-42d1-94c8-44d1de13b70c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-4kr54\" (UID: \"29490aed-9c97-42d1-94c8-44d1de13b70c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-4kr54" Mar 18 10:10:43.839789 master-0 kubenswrapper[30420]: I0318 10:10:43.839783 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-config\") pod \"controller-manager-6c87d45bb4-vxcx9\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") " pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" Mar 18 10:10:43.839971 master-0 kubenswrapper[30420]: I0318 10:10:43.839868 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f88c2a18-11f5-45ef-aff1-3c5976716d85-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-zcm5j\" (UID: \"f88c2a18-11f5-45ef-aff1-3c5976716d85\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zcm5j" Mar 18 10:10:43.839971 master-0 kubenswrapper[30420]: I0318 10:10:43.839878 30420 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/62b82d72-d73c-451a-84e1-551d73036aa8-iptables-alerter-script\") pod \"iptables-alerter-r7h65\" (UID: \"62b82d72-d73c-451a-84e1-551d73036aa8\") " pod="openshift-network-operator/iptables-alerter-r7h65" Mar 18 10:10:43.839971 master-0 kubenswrapper[30420]: I0318 10:10:43.839897 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9f5c64aa-676e-4e48-b714-02f6edb1d361-cert\") pod \"cluster-autoscaler-operator-866dc4744-mw9tt\" (UID: \"9f5c64aa-676e-4e48-b714-02f6edb1d361\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-mw9tt" Mar 18 10:10:43.839971 master-0 kubenswrapper[30420]: I0318 10:10:43.839933 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/af1bbeee-1faf-43d1-943f-ee5319cef4e9-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-6rrn7\" (UID: \"af1bbeee-1faf-43d1-943f-ee5319cef4e9\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-6rrn7" Mar 18 10:10:43.840118 master-0 kubenswrapper[30420]: I0318 10:10:43.839961 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/74476be5-669a-4737-b93b-c4870423a4da-cert\") pod \"ingress-canary-rzksb\" (UID: \"74476be5-669a-4737-b93b-c4870423a4da\") " pod="openshift-ingress-canary/ingress-canary-rzksb" Mar 18 10:10:43.840118 master-0 kubenswrapper[30420]: I0318 10:10:43.840032 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " 
pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" Mar 18 10:10:43.840118 master-0 kubenswrapper[30420]: I0318 10:10:43.840072 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1cb8ab19-0564-4182-a7e3-0943c1480663-metrics-client-ca\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 10:10:43.840118 master-0 kubenswrapper[30420]: I0318 10:10:43.840096 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-etcd-serving-ca\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 10:10:43.840268 master-0 kubenswrapper[30420]: I0318 10:10:43.840122 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c7b317c-d141-4e69-9c82-4a5dda6c3248-serving-cert\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 10:10:43.840268 master-0 kubenswrapper[30420]: I0318 10:10:43.840142 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71755097-7543-48f8-8925-0e21650bf8f6-serving-cert\") pod \"insights-operator-68bf6ff9d6-bdcw7\" (UID: \"71755097-7543-48f8-8925-0e21650bf8f6\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bdcw7" Mar 18 10:10:43.840268 master-0 kubenswrapper[30420]: I0318 10:10:43.840148 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-secret-telemeter-client\") pod 
\"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm" Mar 18 10:10:43.840268 master-0 kubenswrapper[30420]: I0318 10:10:43.840206 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-rpxn4\" (UID: \"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4" Mar 18 10:10:43.840268 master-0 kubenswrapper[30420]: I0318 10:10:43.840229 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm" Mar 18 10:10:43.840503 master-0 kubenswrapper[30420]: I0318 10:10:43.840474 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-serving-certs-ca-bundle\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm" Mar 18 10:10:43.840562 master-0 kubenswrapper[30420]: I0318 10:10:43.840514 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-metrics-client-ca\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " 
pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm" Mar 18 10:10:43.840562 master-0 kubenswrapper[30420]: I0318 10:10:43.840549 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bdf80ddc-7c99-4f60-814b-ba98809ef41d-apiservice-cert\") pod \"packageserver-7b64dcc66c-2vx58\" (UID: \"bdf80ddc-7c99-4f60-814b-ba98809ef41d\") " pod="openshift-operator-lifecycle-manager/packageserver-7b64dcc66c-2vx58" Mar 18 10:10:43.840662 master-0 kubenswrapper[30420]: I0318 10:10:43.840641 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-client-ca-bundle\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 10:10:43.840715 master-0 kubenswrapper[30420]: I0318 10:10:43.840683 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-client-ca\") pod \"controller-manager-6c87d45bb4-vxcx9\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") " pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" Mar 18 10:10:43.840759 master-0 kubenswrapper[30420]: I0318 10:10:43.840719 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9cfd2323-c33a-4d80-9c25-710920c0e605-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-886k6\" (UID: \"9cfd2323-c33a-4d80-9c25-710920c0e605\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-886k6" Mar 18 10:10:43.840805 master-0 kubenswrapper[30420]: I0318 10:10:43.840773 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/71755097-7543-48f8-8925-0e21650bf8f6-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-bdcw7\" (UID: \"71755097-7543-48f8-8925-0e21650bf8f6\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bdcw7" Mar 18 10:10:43.841001 master-0 kubenswrapper[30420]: I0318 10:10:43.840551 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ce65f61f-8e3a-47d5-ac12-ad4ab05d2850-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-5dg2r\" (UID: \"ce65f61f-8e3a-47d5-ac12-ad4ab05d2850\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-5dg2r" Mar 18 10:10:43.841001 master-0 kubenswrapper[30420]: I0318 10:10:43.840990 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/29490aed-9c97-42d1-94c8-44d1de13b70c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-4kr54\" (UID: \"29490aed-9c97-42d1-94c8-44d1de13b70c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-4kr54" Mar 18 10:10:43.841185 master-0 kubenswrapper[30420]: I0318 10:10:43.841139 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bdf80ddc-7c99-4f60-814b-ba98809ef41d-apiservice-cert\") pod \"packageserver-7b64dcc66c-2vx58\" (UID: \"bdf80ddc-7c99-4f60-814b-ba98809ef41d\") " pod="openshift-operator-lifecycle-manager/packageserver-7b64dcc66c-2vx58" Mar 18 10:10:43.841241 master-0 kubenswrapper[30420]: I0318 10:10:43.841159 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f88c2a18-11f5-45ef-aff1-3c5976716d85-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-zcm5j\" (UID: 
\"f88c2a18-11f5-45ef-aff1-3c5976716d85\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zcm5j" Mar 18 10:10:43.858007 master-0 kubenswrapper[30420]: I0318 10:10:43.857933 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 18 10:10:43.859186 master-0 kubenswrapper[30420]: I0318 10:10:43.859150 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1084562a-20a0-432d-b739-90bc0a4daff2-config\") pod \"cluster-baremetal-operator-6f69995874-lnq7l\" (UID: \"1084562a-20a0-432d-b739-90bc0a4daff2\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l" Mar 18 10:10:43.877406 master-0 kubenswrapper[30420]: I0318 10:10:43.877346 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 18 10:10:43.881102 master-0 kubenswrapper[30420]: I0318 10:10:43.881062 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1084562a-20a0-432d-b739-90bc0a4daff2-cert\") pod \"cluster-baremetal-operator-6f69995874-lnq7l\" (UID: \"1084562a-20a0-432d-b739-90bc0a4daff2\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l" Mar 18 10:10:43.896535 master-0 kubenswrapper[30420]: I0318 10:10:43.896476 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 18 10:10:43.917299 master-0 kubenswrapper[30420]: I0318 10:10:43.917251 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 18 10:10:43.919127 master-0 kubenswrapper[30420]: I0318 10:10:43.919070 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/bb942756-bac7-414d-b179-cebdce588a13-webhook-cert\") pod \"network-node-identity-7fl4x\" (UID: \"bb942756-bac7-414d-b179-cebdce588a13\") " pod="openshift-network-node-identity/network-node-identity-7fl4x"
Mar 18 10:10:43.936556 master-0 kubenswrapper[30420]: I0318 10:10:43.936453 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 18 10:10:43.938656 master-0 kubenswrapper[30420]: I0318 10:10:43.938597 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bb942756-bac7-414d-b179-cebdce588a13-env-overrides\") pod \"network-node-identity-7fl4x\" (UID: \"bb942756-bac7-414d-b179-cebdce588a13\") " pod="openshift-network-node-identity/network-node-identity-7fl4x"
Mar 18 10:10:43.955890 master-0 kubenswrapper[30420]: I0318 10:10:43.955849 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-g2rgj"
Mar 18 10:10:43.976601 master-0 kubenswrapper[30420]: I0318 10:10:43.976499 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Mar 18 10:10:43.978239 master-0 kubenswrapper[30420]: I0318 10:10:43.978170 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/bb942756-bac7-414d-b179-cebdce588a13-ovnkube-identity-cm\") pod \"network-node-identity-7fl4x\" (UID: \"bb942756-bac7-414d-b179-cebdce588a13\") " pod="openshift-network-node-identity/network-node-identity-7fl4x"
Mar 18 10:10:43.996714 master-0 kubenswrapper[30420]: I0318 10:10:43.996642 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Mar 18 10:10:44.000764 master-0 kubenswrapper[30420]: I0318 10:10:44.000696 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-rpxn4\" (UID: \"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4"
Mar 18 10:10:44.016415 master-0 kubenswrapper[30420]: I0318 10:10:44.016366 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Mar 18 10:10:44.020851 master-0 kubenswrapper[30420]: I0318 10:10:44.020771 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-rpxn4\" (UID: \"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4"
Mar 18 10:10:44.036654 master-0 kubenswrapper[30420]: I0318 10:10:44.036583 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 18 10:10:44.041195 master-0 kubenswrapper[30420]: I0318 10:10:44.041157 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c7b317c-d141-4e69-9c82-4a5dda6c3248-serving-cert\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf"
Mar 18 10:10:44.056682 master-0 kubenswrapper[30420]: I0318 10:10:44.056608 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Mar 18 10:10:44.057735 master-0 kubenswrapper[30420]: I0318 10:10:44.057684 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-rpxn4\" (UID: \"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4"
Mar 18 10:10:44.077499 master-0 kubenswrapper[30420]: I0318 10:10:44.077440 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 18 10:10:44.078932 master-0 kubenswrapper[30420]: I0318 10:10:44.078820 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0c7b317c-d141-4e69-9c82-4a5dda6c3248-encryption-config\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf"
Mar 18 10:10:44.096411 master-0 kubenswrapper[30420]: I0318 10:10:44.096350 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Mar 18 10:10:44.116990 master-0 kubenswrapper[30420]: I0318 10:10:44.116901 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Mar 18 10:10:44.136346 master-0 kubenswrapper[30420]: I0318 10:10:44.136281 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 18 10:10:44.137420 master-0 kubenswrapper[30420]: I0318 10:10:44.137391 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0c7b317c-d141-4e69-9c82-4a5dda6c3248-etcd-client\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf"
Mar 18 10:10:44.156078 master-0 kubenswrapper[30420]: I0318 10:10:44.156033 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-945t9"
Mar 18 10:10:44.180533 master-0 kubenswrapper[30420]: I0318 10:10:44.180394 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Mar 18 10:10:44.194665 master-0 kubenswrapper[30420]: I0318 10:10:44.194516 30420 request.go:700] Waited for 2.014740446s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&limit=500&resourceVersion=0
Mar 18 10:10:44.196213 master-0 kubenswrapper[30420]: I0318 10:10:44.196149 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Mar 18 10:10:44.197452 master-0 kubenswrapper[30420]: I0318 10:10:44.197391 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/196e7607-1ddf-467b-9901-b4be746130a1-certs\") pod \"machine-config-server-9wnkm\" (UID: \"196e7607-1ddf-467b-9901-b4be746130a1\") " pod="openshift-machine-config-operator/machine-config-server-9wnkm"
Mar 18 10:10:44.216643 master-0 kubenswrapper[30420]: I0318 10:10:44.216564 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Mar 18 10:10:44.221623 master-0 kubenswrapper[30420]: I0318 10:10:44.221554 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/74476be5-669a-4737-b93b-c4870423a4da-cert\") pod \"ingress-canary-rzksb\" (UID: \"74476be5-669a-4737-b93b-c4870423a4da\") " pod="openshift-ingress-canary/ingress-canary-rzksb"
Mar 18 10:10:44.237314 master-0 kubenswrapper[30420]: I0318 10:10:44.237246 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Mar 18 10:10:44.259175 master-0 kubenswrapper[30420]: I0318 10:10:44.259108 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-bncrc"
Mar 18 10:10:44.277308 master-0 kubenswrapper[30420]: I0318 10:10:44.277258 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 18 10:10:44.287491 master-0 kubenswrapper[30420]: I0318 10:10:44.287392 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/196e7607-1ddf-467b-9901-b4be746130a1-node-bootstrap-token\") pod \"machine-config-server-9wnkm\" (UID: \"196e7607-1ddf-467b-9901-b4be746130a1\") " pod="openshift-machine-config-operator/machine-config-server-9wnkm"
Mar 18 10:10:44.296341 master-0 kubenswrapper[30420]: I0318 10:10:44.296287 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 18 10:10:44.297712 master-0 kubenswrapper[30420]: I0318 10:10:44.297646 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-image-import-ca\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf"
Mar 18 10:10:44.315741 master-0 kubenswrapper[30420]: I0318 10:10:44.315669 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 18 10:10:44.316987 master-0 kubenswrapper[30420]: I0318 10:10:44.316955 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-audit\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf"
Mar 18 10:10:44.336563 master-0 kubenswrapper[30420]: I0318 10:10:44.336498 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 18 10:10:44.341129 master-0 kubenswrapper[30420]: I0318 10:10:44.341084 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-etcd-serving-ca\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf"
Mar 18 10:10:44.356439 master-0 kubenswrapper[30420]: I0318 10:10:44.356380 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 18 10:10:44.376973 master-0 kubenswrapper[30420]: I0318 10:10:44.376908 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 18 10:10:44.380373 master-0 kubenswrapper[30420]: I0318 10:10:44.380302 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-config\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf"
Mar 18 10:10:44.402784 master-0 kubenswrapper[30420]: I0318 10:10:44.402722 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 18 10:10:44.409685 master-0 kubenswrapper[30420]: I0318 10:10:44.409635 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c7b317c-d141-4e69-9c82-4a5dda6c3248-trusted-ca-bundle\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf"
Mar 18 10:10:44.415747 master-0 kubenswrapper[30420]: I0318 10:10:44.415714 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 18 10:10:44.436767 master-0 kubenswrapper[30420]: I0318 10:10:44.436685 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Mar 18 10:10:44.437374 master-0 kubenswrapper[30420]: I0318 10:10:44.437312 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f875878f-3588-42f1-9488-750d9f4582f8-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-ssnvh\" (UID: \"f875878f-3588-42f1-9488-750d9f4582f8\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-ssnvh"
Mar 18 10:10:44.456041 master-0 kubenswrapper[30420]: I0318 10:10:44.455891 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Mar 18 10:10:44.460753 master-0 kubenswrapper[30420]: I0318 10:10:44.460693 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/af1bbeee-1faf-43d1-943f-ee5319cef4e9-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-6rrn7\" (UID: \"af1bbeee-1faf-43d1-943f-ee5319cef4e9\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-6rrn7"
Mar 18 10:10:44.460753 master-0 kubenswrapper[30420]: I0318 10:10:44.460731 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5900a401-21c2-47f0-a921-47c648da558d-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg"
Mar 18 10:10:44.461066 master-0 kubenswrapper[30420]: I0318 10:10:44.460815 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9cfd2323-c33a-4d80-9c25-710920c0e605-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-886k6\" (UID: \"9cfd2323-c33a-4d80-9c25-710920c0e605\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-886k6"
Mar 18 10:10:44.461664 master-0 kubenswrapper[30420]: I0318 10:10:44.461607 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-metrics-client-ca\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"
Mar 18 10:10:44.468748 master-0 kubenswrapper[30420]: I0318 10:10:44.468673 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1cb8ab19-0564-4182-a7e3-0943c1480663-metrics-client-ca\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t"
Mar 18 10:10:44.476621 master-0 kubenswrapper[30420]: I0318 10:10:44.475990 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Mar 18 10:10:44.479627 master-0 kubenswrapper[30420]: I0318 10:10:44.479549 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/af1bbeee-1faf-43d1-943f-ee5319cef4e9-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-6rrn7\" (UID: \"af1bbeee-1faf-43d1-943f-ee5319cef4e9\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-6rrn7"
Mar 18 10:10:44.496670 master-0 kubenswrapper[30420]: I0318 10:10:44.496582 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-mkddq"
Mar 18 10:10:44.515665 master-0 kubenswrapper[30420]: I0318 10:10:44.515589 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-bs6wb"
Mar 18 10:10:44.536810 master-0 kubenswrapper[30420]: I0318 10:10:44.536675 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Mar 18 10:10:44.538968 master-0 kubenswrapper[30420]: I0318 10:10:44.538915 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/9cfd2323-c33a-4d80-9c25-710920c0e605-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-886k6\" (UID: \"9cfd2323-c33a-4d80-9c25-710920c0e605\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-886k6"
Mar 18 10:10:44.559346 master-0 kubenswrapper[30420]: I0318 10:10:44.559167 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Mar 18 10:10:44.563173 master-0 kubenswrapper[30420]: I0318 10:10:44.563120 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9cfd2323-c33a-4d80-9c25-710920c0e605-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-886k6\" (UID: \"9cfd2323-c33a-4d80-9c25-710920c0e605\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-886k6"
Mar 18 10:10:44.576455 master-0 kubenswrapper[30420]: I0318 10:10:44.576382 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-ht56j"
Mar 18 10:10:44.598642 master-0 kubenswrapper[30420]: I0318 10:10:44.598559 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Mar 18 10:10:44.599036 master-0 kubenswrapper[30420]: I0318 10:10:44.598991 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/af1bbeee-1faf-43d1-943f-ee5319cef4e9-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-6rrn7\" (UID: \"af1bbeee-1faf-43d1-943f-ee5319cef4e9\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-6rrn7"
Mar 18 10:10:44.617059 master-0 kubenswrapper[30420]: I0318 10:10:44.616994 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 18 10:10:44.621168 master-0 kubenswrapper[30420]: I0318 10:10:44.621124 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-config\") pod \"controller-manager-6c87d45bb4-vxcx9\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") " pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9"
Mar 18 10:10:44.636804 master-0 kubenswrapper[30420]: I0318 10:10:44.636725 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle"
Mar 18 10:10:44.641886 master-0 kubenswrapper[30420]: I0318 10:10:44.641804 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-serving-certs-ca-bundle\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"
Mar 18 10:10:44.656879 master-0 kubenswrapper[30420]: I0318 10:10:44.656787 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 18 10:10:44.676722 master-0 kubenswrapper[30420]: I0318 10:10:44.676618 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Mar 18 10:10:44.681374 master-0 kubenswrapper[30420]: I0318 10:10:44.681299 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg"
Mar 18 10:10:44.696960 master-0 kubenswrapper[30420]: I0318 10:10:44.696906 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Mar 18 10:10:44.697954 master-0 kubenswrapper[30420]: I0318 10:10:44.697907 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t"
Mar 18 10:10:44.716471 master-0 kubenswrapper[30420]: I0318 10:10:44.716353 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls"
Mar 18 10:10:44.721135 master-0 kubenswrapper[30420]: I0318 10:10:44.721078 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-telemeter-client-tls\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"
Mar 18 10:10:44.737003 master-0 kubenswrapper[30420]: I0318 10:10:44.736911 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-4lcwf"
Mar 18 10:10:44.756295 master-0 kubenswrapper[30420]: I0318 10:10:44.756213 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs"
Mar 18 10:10:44.760716 master-0 kubenswrapper[30420]: I0318 10:10:44.760661 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-federate-client-tls\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"
Mar 18 10:10:44.777232 master-0 kubenswrapper[30420]: I0318 10:10:44.777155 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config"
Mar 18 10:10:44.782344 master-0 kubenswrapper[30420]: I0318 10:10:44.782297 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"
Mar 18 10:10:44.796459 master-0 kubenswrapper[30420]: I0318 10:10:44.796415 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-gxfn6"
Mar 18 10:10:44.818261 master-0 kubenswrapper[30420]: I0318 10:10:44.818183 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client"
Mar 18 10:10:44.820867 master-0 kubenswrapper[30420]: I0318 10:10:44.820793 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-secret-telemeter-client\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm"
Mar 18 10:10:44.837340 master-0 kubenswrapper[30420]: E0318 10:10:44.837277 30420 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:44.837513 master-0 kubenswrapper[30420]: E0318 10:10:44.837378 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-metrics-server-audit-profiles podName:106fc2a2-9e7b-4f86-94b8-b1a1906646d8 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:45.837359575 +0000 UTC m=+9.890105514 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-metrics-server-audit-profiles") pod "metrics-server-74c475bc87-xx98m" (UID: "106fc2a2-9e7b-4f86-94b8-b1a1906646d8") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:44.837513 master-0 kubenswrapper[30420]: E0318 10:10:44.837472 30420 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:44.837701 master-0 kubenswrapper[30420]: E0318 10:10:44.837620 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-client-ca podName:8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d nodeName:}" failed. No retries permitted until 2026-03-18 10:10:45.83755393 +0000 UTC m=+9.890299899 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-client-ca") pod "route-controller-manager-5657df7dd8-4pp68" (UID: "8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:44.837701 master-0 kubenswrapper[30420]: E0318 10:10:44.837664 30420 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:44.837949 master-0 kubenswrapper[30420]: E0318 10:10:44.837709 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-serving-cert podName:9fc664ff-2e8f-441d-82dc-8f21c1d362d7 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:45.837694243 +0000 UTC m=+9.890440212 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-serving-cert") pod "controller-manager-6c87d45bb4-vxcx9" (UID: "9fc664ff-2e8f-441d-82dc-8f21c1d362d7") : failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:44.837949 master-0 kubenswrapper[30420]: E0318 10:10:44.837799 30420 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:44.838085 master-0 kubenswrapper[30420]: E0318 10:10:44.837954 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-config podName:8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d nodeName:}" failed. No retries permitted until 2026-03-18 10:10:45.837930799 +0000 UTC m=+9.890676738 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-config") pod "route-controller-manager-5657df7dd8-4pp68" (UID: "8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:44.838625 master-0 kubenswrapper[30420]: E0318 10:10:44.838575 30420 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:44.838737 master-0 kubenswrapper[30420]: E0318 10:10:44.838641 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-configmap-kubelet-serving-ca-bundle podName:106fc2a2-9e7b-4f86-94b8-b1a1906646d8 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:45.838627177 +0000 UTC m=+9.891373116 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-configmap-kubelet-serving-ca-bundle") pod "metrics-server-74c475bc87-xx98m" (UID: "106fc2a2-9e7b-4f86-94b8-b1a1906646d8") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:44.840199 master-0 kubenswrapper[30420]: E0318 10:10:44.840155 30420 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:44.840304 master-0 kubenswrapper[30420]: E0318 10:10:44.840219 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-telemeter-trusted-ca-bundle podName:aa4cba67-b5d4-46c2-8cad-1a1379f764cb nodeName:}" failed. No retries permitted until 2026-03-18 10:10:45.840203386 +0000 UTC m=+9.892949335 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-telemeter-trusted-ca-bundle") pod "telemeter-client-585cb8cdb6-g2jjm" (UID: "aa4cba67-b5d4-46c2-8cad-1a1379f764cb") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:44.840304 master-0 kubenswrapper[30420]: E0318 10:10:44.840222 30420 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:44.840304 master-0 kubenswrapper[30420]: E0318 10:10:44.840274 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-secret-metrics-server-tls podName:106fc2a2-9e7b-4f86-94b8-b1a1906646d8 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:45.840260738 +0000 UTC m=+9.893006667 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-secret-metrics-server-tls") pod "metrics-server-74c475bc87-xx98m" (UID: "106fc2a2-9e7b-4f86-94b8-b1a1906646d8") : failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:44.840496 master-0 kubenswrapper[30420]: E0318 10:10:44.840319 30420 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:44.840496 master-0 kubenswrapper[30420]: E0318 10:10:44.840341 30420 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:44.840496 master-0 kubenswrapper[30420]: E0318 10:10:44.840371 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-serving-cert podName:8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d nodeName:}" failed. No retries permitted until 2026-03-18 10:10:45.84035711 +0000 UTC m=+9.893103119 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-serving-cert") pod "route-controller-manager-5657df7dd8-4pp68" (UID: "8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d") : failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:44.840496 master-0 kubenswrapper[30420]: E0318 10:10:44.840374 30420 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:44.840496 master-0 kubenswrapper[30420]: E0318 10:10:44.840394 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-secret-metrics-client-certs podName:106fc2a2-9e7b-4f86-94b8-b1a1906646d8 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:45.840383921 +0000 UTC m=+9.893129960 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-secret-metrics-client-certs") pod "metrics-server-74c475bc87-xx98m" (UID: "106fc2a2-9e7b-4f86-94b8-b1a1906646d8") : failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:44.840496 master-0 kubenswrapper[30420]: E0318 10:10:44.840424 30420 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:44.840496 master-0 kubenswrapper[30420]: E0318 10:10:44.840447 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-kube-rbac-proxy-config podName:5900a401-21c2-47f0-a921-47c648da558d nodeName:}" failed. No retries permitted until 2026-03-18 10:10:45.840437342 +0000 UTC m=+9.893183411 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7bbc969446-8tbkg" (UID: "5900a401-21c2-47f0-a921-47c648da558d") : failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:44.840496 master-0 kubenswrapper[30420]: E0318 10:10:44.840496 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-tls podName:1cb8ab19-0564-4182-a7e3-0943c1480663 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:45.840477543 +0000 UTC m=+9.893223512 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-tls") pod "node-exporter-l9q9t" (UID: "1cb8ab19-0564-4182-a7e3-0943c1480663") : failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:44.840993 master-0 kubenswrapper[30420]: E0318 10:10:44.840500 30420 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:44.840993 master-0 kubenswrapper[30420]: E0318 10:10:44.840596 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-proxy-ca-bundles podName:9fc664ff-2e8f-441d-82dc-8f21c1d362d7 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:45.840573165 +0000 UTC m=+9.893319134 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-proxy-ca-bundles") pod "controller-manager-6c87d45bb4-vxcx9" (UID: "9fc664ff-2e8f-441d-82dc-8f21c1d362d7") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:44.840993 master-0 kubenswrapper[30420]: E0318 10:10:44.840699 30420 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:44.840993 master-0 kubenswrapper[30420]: E0318 10:10:44.840728 30420 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:44.840993 master-0 kubenswrapper[30420]: E0318 10:10:44.840759 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f5c64aa-676e-4e48-b714-02f6edb1d361-cert podName:9f5c64aa-676e-4e48-b714-02f6edb1d361 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:45.84074713 +0000 UTC m=+9.893493189 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9f5c64aa-676e-4e48-b714-02f6edb1d361-cert") pod "cluster-autoscaler-operator-866dc4744-mw9tt" (UID: "9f5c64aa-676e-4e48-b714-02f6edb1d361") : failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:44.840993 master-0 kubenswrapper[30420]: E0318 10:10:44.840786 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-tls podName:5900a401-21c2-47f0-a921-47c648da558d nodeName:}" failed. No retries permitted until 2026-03-18 10:10:45.840775751 +0000 UTC m=+9.893521800 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-tls") pod "kube-state-metrics-7bbc969446-8tbkg" (UID: "5900a401-21c2-47f0-a921-47c648da558d") : failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:44.841861 master-0 kubenswrapper[30420]: E0318 10:10:44.841800 30420 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:44.842087 master-0 kubenswrapper[30420]: E0318 10:10:44.841860 30420 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-113q5nsjog6km: failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:44.842087 master-0 kubenswrapper[30420]: E0318 10:10:44.841870 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-client-ca podName:9fc664ff-2e8f-441d-82dc-8f21c1d362d7 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:45.841853208 +0000 UTC m=+9.894599237 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-client-ca") pod "controller-manager-6c87d45bb4-vxcx9" (UID: "9fc664ff-2e8f-441d-82dc-8f21c1d362d7") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 10:10:44.842087 master-0 kubenswrapper[30420]: E0318 10:10:44.841938 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-client-ca-bundle podName:106fc2a2-9e7b-4f86-94b8-b1a1906646d8 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:45.841921809 +0000 UTC m=+9.894667768 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-client-ca-bundle") pod "metrics-server-74c475bc87-xx98m" (UID: "106fc2a2-9e7b-4f86-94b8-b1a1906646d8") : failed to sync secret cache: timed out waiting for the condition
Mar 18 10:10:44.843592 master-0 kubenswrapper[30420]: I0318 10:10:44.843541 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38"
Mar 18 10:10:44.856302 master-0 kubenswrapper[30420]: I0318 10:10:44.856212 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Mar 18 10:10:44.877215 master-0 kubenswrapper[30420]: I0318 10:10:44.877111 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Mar 18 10:10:44.896796 master-0 kubenswrapper[30420]: I0318 10:10:44.896714 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-pvxkh"
Mar 18 10:10:44.916183 master-0 kubenswrapper[30420]: I0318 10:10:44.916091 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Mar 18 10:10:44.936184 master-0 kubenswrapper[30420]: I0318 10:10:44.936144 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-czmbt"
Mar 18 10:10:44.956759 master-0 kubenswrapper[30420]: I0318 10:10:44.956712 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 18 10:10:44.976871 master-0 kubenswrapper[30420]: I0318 10:10:44.976565 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 18 10:10:45.003809 master-0 kubenswrapper[30420]: I0318 10:10:45.003747 30420
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 10:10:45.015710 master-0 kubenswrapper[30420]: I0318 10:10:45.015658 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 18 10:10:45.036596 master-0 kubenswrapper[30420]: I0318 10:10:45.036512 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-c5mc5" Mar 18 10:10:45.056848 master-0 kubenswrapper[30420]: I0318 10:10:45.056766 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 10:10:45.076771 master-0 kubenswrapper[30420]: I0318 10:10:45.076675 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 18 10:10:45.097208 master-0 kubenswrapper[30420]: I0318 10:10:45.097120 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 18 10:10:45.117559 master-0 kubenswrapper[30420]: I0318 10:10:45.117471 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 18 10:10:45.136220 master-0 kubenswrapper[30420]: I0318 10:10:45.136157 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 18 10:10:45.156925 master-0 kubenswrapper[30420]: I0318 10:10:45.156870 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 18 10:10:45.177895 master-0 kubenswrapper[30420]: I0318 10:10:45.177760 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 18 10:10:45.195087 master-0 kubenswrapper[30420]: I0318 
10:10:45.195000 30420 request.go:700] Waited for 2.998778488s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/secrets?fieldSelector=metadata.name%3Dmetrics-client-certs&limit=500&resourceVersion=0 Mar 18 10:10:45.197448 master-0 kubenswrapper[30420]: I0318 10:10:45.197410 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 18 10:10:45.217541 master-0 kubenswrapper[30420]: I0318 10:10:45.217464 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 18 10:10:45.237406 master-0 kubenswrapper[30420]: I0318 10:10:45.237276 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-113q5nsjog6km" Mar 18 10:10:45.257060 master-0 kubenswrapper[30420]: I0318 10:10:45.257020 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 18 10:10:45.276736 master-0 kubenswrapper[30420]: I0318 10:10:45.276665 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-t4btg" Mar 18 10:10:45.335985 master-0 kubenswrapper[30420]: E0318 10:10:45.333131 30420 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.151s" Mar 18 10:10:45.335985 master-0 kubenswrapper[30420]: I0318 10:10:45.333232 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:10:45.349888 master-0 kubenswrapper[30420]: I0318 10:10:45.349846 30420 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Mar 18 10:10:45.361481 master-0 kubenswrapper[30420]: I0318 10:10:45.361407 30420 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-p5dk8\" (UniqueName: \"kubernetes.io/projected/3646e0cd-49c9-4a98-a2e3-efe9359cc6c4-kube-api-access-p5dk8\") pod \"openshift-controller-manager-operator-8c94f4649-g25jq\" (UID: \"3646e0cd-49c9-4a98-a2e3-efe9359cc6c4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-g25jq" Mar 18 10:10:45.377125 master-0 kubenswrapper[30420]: I0318 10:10:45.377068 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fjk8\" (UniqueName: \"kubernetes.io/projected/0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480-kube-api-access-9fjk8\") pod \"openshift-config-operator-95bf4f4d-495pg\" (UID: \"0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" Mar 18 10:10:45.397722 master-0 kubenswrapper[30420]: I0318 10:10:45.397651 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpk5h\" (UniqueName: \"kubernetes.io/projected/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-kube-api-access-gpk5h\") pod \"route-controller-manager-5657df7dd8-4pp68\" (UID: \"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d\") " pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" Mar 18 10:10:45.414616 master-0 kubenswrapper[30420]: I0318 10:10:45.414552 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxl7x\" (UniqueName: \"kubernetes.io/projected/0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a-kube-api-access-kxl7x\") pod \"catalogd-controller-manager-6864dc98f7-nq7mw\" (UID: \"0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw" Mar 18 10:10:45.431813 master-0 kubenswrapper[30420]: I0318 10:10:45.431754 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2chb\" (UniqueName: 
\"kubernetes.io/projected/8cb5158f-2199-42c0-995a-8490c9ec8a95-kube-api-access-p2chb\") pod \"dns-operator-9c5679d8f-jrmkr\" (UID: \"8cb5158f-2199-42c0-995a-8490c9ec8a95\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-jrmkr" Mar 18 10:10:45.451290 master-0 kubenswrapper[30420]: I0318 10:10:45.451207 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4qp9\" (UniqueName: \"kubernetes.io/projected/d4d2218c-f9df-4d43-8727-ed3a920e23f7-kube-api-access-w4qp9\") pod \"package-server-manager-7b95f86987-r8fkv\" (UID: \"d4d2218c-f9df-4d43-8727-ed3a920e23f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv" Mar 18 10:10:45.475622 master-0 kubenswrapper[30420]: I0318 10:10:45.475561 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z459j\" (UniqueName: \"kubernetes.io/projected/43d54514-989c-4c82-93f9-153b44eacdd1-kube-api-access-z459j\") pod \"router-default-7dcf5569b5-82tbk\" (UID: \"43d54514-989c-4c82-93f9-153b44eacdd1\") " pod="openshift-ingress/router-default-7dcf5569b5-82tbk" Mar 18 10:10:45.492128 master-0 kubenswrapper[30420]: I0318 10:10:45.491787 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtnxf\" (UniqueName: \"kubernetes.io/projected/5900a401-21c2-47f0-a921-47c648da558d-kube-api-access-qtnxf\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" Mar 18 10:10:45.511916 master-0 kubenswrapper[30420]: I0318 10:10:45.511810 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq4rm\" (UniqueName: \"kubernetes.io/projected/da04c6fa-4916-4bed-a6b2-cc92bf2ee379-kube-api-access-vq4rm\") pod \"dns-default-z9sf5\" (UID: \"da04c6fa-4916-4bed-a6b2-cc92bf2ee379\") " pod="openshift-dns/dns-default-z9sf5" Mar 18 10:10:45.533910 master-0 kubenswrapper[30420]: I0318 
10:10:45.533868 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhzg4\" (UniqueName: \"kubernetes.io/projected/d26036f1-bdce-4ec5-873f-962fa7e8e6c1-kube-api-access-lhzg4\") pod \"cluster-olm-operator-67dcd4998-mrc8q\" (UID: \"d26036f1-bdce-4ec5-873f-962fa7e8e6c1\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-mrc8q" Mar 18 10:10:45.561261 master-0 kubenswrapper[30420]: I0318 10:10:45.561191 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx9p2\" (UniqueName: \"kubernetes.io/projected/db52ca42-e458-407f-9eeb-bf6de6405edc-kube-api-access-jx9p2\") pod \"olm-operator-5c9796789-hc74k\" (UID: \"db52ca42-e458-407f-9eeb-bf6de6405edc\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k" Mar 18 10:10:45.573843 master-0 kubenswrapper[30420]: I0318 10:10:45.573781 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvx6m\" (UniqueName: \"kubernetes.io/projected/74476be5-669a-4737-b93b-c4870423a4da-kube-api-access-nvx6m\") pod \"ingress-canary-rzksb\" (UID: \"74476be5-669a-4737-b93b-c4870423a4da\") " pod="openshift-ingress-canary/ingress-canary-rzksb" Mar 18 10:10:45.590113 master-0 kubenswrapper[30420]: I0318 10:10:45.590068 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4g9s\" (UniqueName: \"kubernetes.io/projected/196e7607-1ddf-467b-9901-b4be746130a1-kube-api-access-l4g9s\") pod \"machine-config-server-9wnkm\" (UID: \"196e7607-1ddf-467b-9901-b4be746130a1\") " pod="openshift-machine-config-operator/machine-config-server-9wnkm" Mar 18 10:10:45.612012 master-0 kubenswrapper[30420]: I0318 10:10:45.611977 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0999f781-3299-4cb6-ba76-2a4f4584c685-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-pzqqc\" (UID: 
\"0999f781-3299-4cb6-ba76-2a4f4584c685\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-pzqqc" Mar 18 10:10:45.632710 master-0 kubenswrapper[30420]: I0318 10:10:45.632664 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5j9d\" (UniqueName: \"kubernetes.io/projected/b9c87410-8689-4884-b5a8-df3ecbb7f1a4-kube-api-access-l5j9d\") pod \"certified-operators-pdfn6\" (UID: \"b9c87410-8689-4884-b5a8-df3ecbb7f1a4\") " pod="openshift-marketplace/certified-operators-pdfn6" Mar 18 10:10:45.652649 master-0 kubenswrapper[30420]: I0318 10:10:45.652603 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ktpl\" (UniqueName: \"kubernetes.io/projected/bb942756-bac7-414d-b179-cebdce588a13-kube-api-access-2ktpl\") pod \"network-node-identity-7fl4x\" (UID: \"bb942756-bac7-414d-b179-cebdce588a13\") " pod="openshift-network-node-identity/network-node-identity-7fl4x" Mar 18 10:10:45.680884 master-0 kubenswrapper[30420]: I0318 10:10:45.680812 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k94j4\" (UniqueName: \"kubernetes.io/projected/e5e0836f-c0b4-40cd-9f63-55774da2740e-kube-api-access-k94j4\") pod \"machine-config-daemon-mtdk2\" (UID: \"e5e0836f-c0b4-40cd-9f63-55774da2740e\") " pod="openshift-machine-config-operator/machine-config-daemon-mtdk2" Mar 18 10:10:45.700979 master-0 kubenswrapper[30420]: I0318 10:10:45.700932 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6qn5\" (UniqueName: \"kubernetes.io/projected/db376fea-5756-4bc2-9685-f32730b5a6f7-kube-api-access-r6qn5\") pod \"community-operators-nzqck\" (UID: \"db376fea-5756-4bc2-9685-f32730b5a6f7\") " pod="openshift-marketplace/community-operators-nzqck" Mar 18 10:10:45.723433 master-0 kubenswrapper[30420]: I0318 10:10:45.723356 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-t77j8\" (UniqueName: \"kubernetes.io/projected/b0f77d68-f228-4f82-befb-fb2a2ce2e976-kube-api-access-t77j8\") pod \"tuned-6rhgt\" (UID: \"b0f77d68-f228-4f82-befb-fb2a2ce2e976\") " pod="openshift-cluster-node-tuning-operator/tuned-6rhgt" Mar 18 10:10:45.735937 master-0 kubenswrapper[30420]: I0318 10:10:45.735816 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59hld\" (UniqueName: \"kubernetes.io/projected/e8d3cf68-ed97-45b9-8c83-b42bb1f789fc-kube-api-access-59hld\") pod \"node-resolver-hjpz8\" (UID: \"e8d3cf68-ed97-45b9-8c83-b42bb1f789fc\") " pod="openshift-dns/node-resolver-hjpz8" Mar 18 10:10:45.750754 master-0 kubenswrapper[30420]: I0318 10:10:45.750649 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x6ht\" (UniqueName: \"kubernetes.io/projected/0442ec6c-5973-40a5-a0c3-dc02de46d343-kube-api-access-5x6ht\") pod \"network-metrics-daemon-tbxt4\" (UID: \"0442ec6c-5973-40a5-a0c3-dc02de46d343\") " pod="openshift-multus/network-metrics-daemon-tbxt4" Mar 18 10:10:45.772173 master-0 kubenswrapper[30420]: I0318 10:10:45.772125 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghd2r\" (UniqueName: \"kubernetes.io/projected/9ccdc221-4ec5-487e-8ec4-85284ed628d8-kube-api-access-ghd2r\") pod \"network-operator-7bd846bfc4-8srnz\" (UID: \"9ccdc221-4ec5-487e-8ec4-85284ed628d8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-8srnz" Mar 18 10:10:45.791857 master-0 kubenswrapper[30420]: I0318 10:10:45.791768 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmsjt\" (UniqueName: \"kubernetes.io/projected/1084562a-20a0-432d-b739-90bc0a4daff2-kube-api-access-qmsjt\") pod \"cluster-baremetal-operator-6f69995874-lnq7l\" (UID: \"1084562a-20a0-432d-b739-90bc0a4daff2\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-lnq7l" Mar 18 10:10:45.811966 master-0 
kubenswrapper[30420]: I0318 10:10:45.811915 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb7tz\" (UniqueName: \"kubernetes.io/projected/8ee99294-4785-49d0-b493-0d734cf09396-kube-api-access-tb7tz\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m" Mar 18 10:10:45.833203 master-0 kubenswrapper[30420]: I0318 10:10:45.833152 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r7qd\" (UniqueName: \"kubernetes.io/projected/f69a00b6-d908-4485-bb0d-57594fc01d24-kube-api-access-5r7qd\") pod \"cluster-monitoring-operator-58845fbb57-8kx9m\" (UID: \"f69a00b6-d908-4485-bb0d-57594fc01d24\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8kx9m" Mar 18 10:10:45.852888 master-0 kubenswrapper[30420]: I0318 10:10:45.852508 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwfph\" (UniqueName: \"kubernetes.io/projected/accc57fb-75f5-4f89-9804-6ede7f77e27c-kube-api-access-nwfph\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" Mar 18 10:10:45.870597 master-0 kubenswrapper[30420]: I0318 10:10:45.870537 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scz6j\" (UniqueName: \"kubernetes.io/projected/f88c2a18-11f5-45ef-aff1-3c5976716d85-kube-api-access-scz6j\") pod \"control-plane-machine-set-operator-6f97756bc8-zcm5j\" (UID: \"f88c2a18-11f5-45ef-aff1-3c5976716d85\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zcm5j" Mar 18 10:10:45.891792 master-0 kubenswrapper[30420]: I0318 10:10:45.891745 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shbrj\" (UniqueName: 
\"kubernetes.io/projected/6f266bad-8b30-4300-ad93-9d48e61f2440-kube-api-access-shbrj\") pod \"marketplace-operator-89ccd998f-2glpv\" (UID: \"6f266bad-8b30-4300-ad93-9d48e61f2440\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv" Mar 18 10:10:45.900615 master-0 kubenswrapper[30420]: I0318 10:10:45.900405 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 10:10:45.900615 master-0 kubenswrapper[30420]: I0318 10:10:45.900461 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-telemeter-trusted-ca-bundle\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm" Mar 18 10:10:45.900615 master-0 kubenswrapper[30420]: I0318 10:10:45.900546 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-proxy-ca-bundles\") pod \"controller-manager-6c87d45bb4-vxcx9\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") " pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" Mar 18 10:10:45.900615 master-0 kubenswrapper[30420]: I0318 10:10:45.900573 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-serving-cert\") pod \"route-controller-manager-5657df7dd8-4pp68\" (UID: \"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d\") " 
pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" Mar 18 10:10:45.901089 master-0 kubenswrapper[30420]: I0318 10:10:45.901035 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-telemeter-trusted-ca-bundle\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm" Mar 18 10:10:45.901160 master-0 kubenswrapper[30420]: I0318 10:10:45.901097 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 10:10:45.901269 master-0 kubenswrapper[30420]: I0318 10:10:45.901215 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-tls\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 10:10:45.901320 master-0 kubenswrapper[30420]: I0318 10:10:45.901298 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-secret-metrics-server-tls\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 10:10:45.901925 master-0 kubenswrapper[30420]: I0318 10:10:45.901453 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-serving-cert\") pod \"route-controller-manager-5657df7dd8-4pp68\" (UID: \"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d\") " pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" Mar 18 10:10:45.901925 master-0 kubenswrapper[30420]: I0318 10:10:45.901573 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-secret-metrics-client-certs\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 10:10:45.901925 master-0 kubenswrapper[30420]: I0318 10:10:45.901581 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-proxy-ca-bundles\") pod \"controller-manager-6c87d45bb4-vxcx9\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") " pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" Mar 18 10:10:45.901925 master-0 kubenswrapper[30420]: I0318 10:10:45.901661 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" Mar 18 10:10:45.901925 master-0 kubenswrapper[30420]: I0318 10:10:45.901675 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/1cb8ab19-0564-4182-a7e3-0943c1480663-node-exporter-tls\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " 
pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 10:10:45.901925 master-0 kubenswrapper[30420]: I0318 10:10:45.901740 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-secret-metrics-server-tls\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 10:10:45.901925 master-0 kubenswrapper[30420]: I0318 10:10:45.901745 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9f5c64aa-676e-4e48-b714-02f6edb1d361-cert\") pod \"cluster-autoscaler-operator-866dc4744-mw9tt\" (UID: \"9f5c64aa-676e-4e48-b714-02f6edb1d361\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-mw9tt" Mar 18 10:10:45.901925 master-0 kubenswrapper[30420]: I0318 10:10:45.901869 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" Mar 18 10:10:45.902248 master-0 kubenswrapper[30420]: I0318 10:10:45.901944 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-secret-metrics-client-certs\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 10:10:45.902248 master-0 kubenswrapper[30420]: I0318 10:10:45.901953 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" Mar 18 10:10:45.902248 master-0 kubenswrapper[30420]: I0318 10:10:45.902100 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-client-ca\") pod \"controller-manager-6c87d45bb4-vxcx9\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") " pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" Mar 18 10:10:45.902248 master-0 kubenswrapper[30420]: I0318 10:10:45.902148 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-client-ca-bundle\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 10:10:45.902248 master-0 kubenswrapper[30420]: I0318 10:10:45.902235 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-metrics-server-audit-profiles\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 10:10:45.902431 master-0 kubenswrapper[30420]: I0318 10:10:45.902272 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/5900a401-21c2-47f0-a921-47c648da558d-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-8tbkg\" (UID: \"5900a401-21c2-47f0-a921-47c648da558d\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-8tbkg" Mar 18 
10:10:45.902431 master-0 kubenswrapper[30420]: I0318 10:10:45.902325 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-client-ca\") pod \"route-controller-manager-5657df7dd8-4pp68\" (UID: \"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d\") " pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" Mar 18 10:10:45.902431 master-0 kubenswrapper[30420]: I0318 10:10:45.902353 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-serving-cert\") pod \"controller-manager-6c87d45bb4-vxcx9\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") " pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" Mar 18 10:10:45.902431 master-0 kubenswrapper[30420]: I0318 10:10:45.902380 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-config\") pod \"route-controller-manager-5657df7dd8-4pp68\" (UID: \"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d\") " pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" Mar 18 10:10:45.903019 master-0 kubenswrapper[30420]: I0318 10:10:45.902672 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-config\") pod \"route-controller-manager-5657df7dd8-4pp68\" (UID: \"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d\") " pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" Mar 18 10:10:45.903019 master-0 kubenswrapper[30420]: I0318 10:10:45.902717 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9f5c64aa-676e-4e48-b714-02f6edb1d361-cert\") pod 
\"cluster-autoscaler-operator-866dc4744-mw9tt\" (UID: \"9f5c64aa-676e-4e48-b714-02f6edb1d361\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-mw9tt" Mar 18 10:10:45.903019 master-0 kubenswrapper[30420]: I0318 10:10:45.902742 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-metrics-server-audit-profiles\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 10:10:45.903019 master-0 kubenswrapper[30420]: I0318 10:10:45.902943 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-client-ca\") pod \"route-controller-manager-5657df7dd8-4pp68\" (UID: \"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d\") " pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" Mar 18 10:10:45.903206 master-0 kubenswrapper[30420]: I0318 10:10:45.903075 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-client-ca\") pod \"controller-manager-6c87d45bb4-vxcx9\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") " pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" Mar 18 10:10:45.903247 master-0 kubenswrapper[30420]: I0318 10:10:45.903235 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-client-ca-bundle\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 10:10:45.903363 master-0 kubenswrapper[30420]: I0318 10:10:45.903323 30420 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-serving-cert\") pod \"controller-manager-6c87d45bb4-vxcx9\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") " pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" Mar 18 10:10:45.912095 master-0 kubenswrapper[30420]: I0318 10:10:45.912046 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmnjp\" (UniqueName: \"kubernetes.io/projected/ce65f61f-8e3a-47d5-ac12-ad4ab05d2850-kube-api-access-jmnjp\") pod \"cluster-samples-operator-85f7577d78-5dg2r\" (UID: \"ce65f61f-8e3a-47d5-ac12-ad4ab05d2850\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-5dg2r" Mar 18 10:10:45.930324 master-0 kubenswrapper[30420]: I0318 10:10:45.930237 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvnrf\" (UniqueName: \"kubernetes.io/projected/62b82d72-d73c-451a-84e1-551d73036aa8-kube-api-access-lvnrf\") pod \"iptables-alerter-r7h65\" (UID: \"62b82d72-d73c-451a-84e1-551d73036aa8\") " pod="openshift-network-operator/iptables-alerter-r7h65" Mar 18 10:10:45.950771 master-0 kubenswrapper[30420]: I0318 10:10:45.950707 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bql7p\" (UniqueName: \"kubernetes.io/projected/bdf80ddc-7c99-4f60-814b-ba98809ef41d-kube-api-access-bql7p\") pod \"packageserver-7b64dcc66c-2vx58\" (UID: \"bdf80ddc-7c99-4f60-814b-ba98809ef41d\") " pod="openshift-operator-lifecycle-manager/packageserver-7b64dcc66c-2vx58" Mar 18 10:10:45.971573 master-0 kubenswrapper[30420]: I0318 10:10:45.971482 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-257hk\" (UniqueName: \"kubernetes.io/projected/29490aed-9c97-42d1-94c8-44d1de13b70c-kube-api-access-257hk\") pod \"cluster-storage-operator-7d87854d6-4kr54\" (UID: \"29490aed-9c97-42d1-94c8-44d1de13b70c\") 
" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-4kr54" Mar 18 10:10:45.991991 master-0 kubenswrapper[30420]: I0318 10:10:45.991918 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xml27\" (UniqueName: \"kubernetes.io/projected/caec44dc-aab7-4407-b34a-52bbe4b4f635-kube-api-access-xml27\") pod \"cloud-credential-operator-744f9dbf77-rtnkl\" (UID: \"caec44dc-aab7-4407-b34a-52bbe4b4f635\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-rtnkl" Mar 18 10:10:46.023529 master-0 kubenswrapper[30420]: I0318 10:10:46.023335 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s54f9\" (UniqueName: \"kubernetes.io/projected/8e812dd9-cd05-4e9e-8710-d0920181ece2-kube-api-access-s54f9\") pod \"csi-snapshot-controller-operator-5f5d689c6b-mqbmq\" (UID: \"8e812dd9-cd05-4e9e-8710-d0920181ece2\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-mqbmq" Mar 18 10:10:46.042380 master-0 kubenswrapper[30420]: I0318 10:10:46.042300 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d89r9\" (UniqueName: \"kubernetes.io/projected/8641c1d1-dd79-4f1f-9343-52d1ee6faf9f-kube-api-access-d89r9\") pod \"cluster-cloud-controller-manager-operator-7dff898856-rpxn4\" (UID: \"8641c1d1-dd79-4f1f-9343-52d1ee6faf9f\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-rpxn4" Mar 18 10:10:46.063421 master-0 kubenswrapper[30420]: I0318 10:10:46.063351 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv8x5\" (UniqueName: \"kubernetes.io/projected/932a70df-3afe-4873-9449-ab6e061d3fe3-kube-api-access-fv8x5\") pod \"csi-snapshot-controller-64854d9cff-2l6cq\" (UID: \"932a70df-3afe-4873-9449-ab6e061d3fe3\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-2l6cq" Mar 18 
10:10:46.072408 master-0 kubenswrapper[30420]: I0318 10:10:46.072337 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlxfz\" (UniqueName: \"kubernetes.io/projected/bb35841e-d992-4044-aaaa-06c9faf47bd0-kube-api-access-zlxfz\") pod \"service-ca-operator-b865698dc-pgtbr\" (UID: \"bb35841e-d992-4044-aaaa-06c9faf47bd0\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pgtbr" Mar 18 10:10:46.102885 master-0 kubenswrapper[30420]: I0318 10:10:46.102783 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hww8g\" (UniqueName: \"kubernetes.io/projected/8126b78e-d1e4-4de7-a71d-ebc9fa0afdae-kube-api-access-hww8g\") pod \"migrator-8487694857-8tqwj\" (UID: \"8126b78e-d1e4-4de7-a71d-ebc9fa0afdae\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-8tqwj" Mar 18 10:10:46.112910 master-0 kubenswrapper[30420]: I0318 10:10:46.112812 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4hfd\" (UniqueName: \"kubernetes.io/projected/c2635254-a491-42e5-b598-461c24bf77ca-kube-api-access-p4hfd\") pod \"cluster-node-tuning-operator-598fbc5f8f-s7rm6\" (UID: \"c2635254-a491-42e5-b598-461c24bf77ca\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-s7rm6" Mar 18 10:10:46.130101 master-0 kubenswrapper[30420]: I0318 10:10:46.130017 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-549bq\" (UniqueName: \"kubernetes.io/projected/0c7b317c-d141-4e69-9c82-4a5dda6c3248-kube-api-access-549bq\") pod \"apiserver-687747fbb4-k7dnf\" (UID: \"0c7b317c-d141-4e69-9c82-4a5dda6c3248\") " pod="openshift-apiserver/apiserver-687747fbb4-k7dnf" Mar 18 10:10:46.152399 master-0 kubenswrapper[30420]: I0318 10:10:46.152305 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj9sq\" (UniqueName: 
\"kubernetes.io/projected/ec53d7fa-445b-4e1d-84ef-545f08e80ccc-kube-api-access-wj9sq\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-lk698\" (UID: \"ec53d7fa-445b-4e1d-84ef-545f08e80ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-lk698" Mar 18 10:10:46.169132 master-0 kubenswrapper[30420]: I0318 10:10:46.169068 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6bvr\" (UniqueName: \"kubernetes.io/projected/0d72e695-0183-4ee8-8add-5425e67f7138-kube-api-access-g6bvr\") pod \"openshift-apiserver-operator-d65958b8-zz68c\" (UID: \"0d72e695-0183-4ee8-8add-5425e67f7138\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-zz68c" Mar 18 10:10:46.188759 master-0 kubenswrapper[30420]: I0318 10:10:46.188687 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxv6v\" (UniqueName: \"kubernetes.io/projected/a078565a-6970-4f42-84f4-938f1d637245-kube-api-access-cxv6v\") pod \"etcd-operator-8544cbcf9c-4tlnm\" (UID: \"a078565a-6970-4f42-84f4-938f1d637245\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-4tlnm" Mar 18 10:10:46.195170 master-0 kubenswrapper[30420]: I0318 10:10:46.195122 30420 request.go:700] Waited for 3.916704518s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token Mar 18 10:10:46.213639 master-0 kubenswrapper[30420]: I0318 10:10:46.213563 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8ee99294-4785-49d0-b493-0d734cf09396-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-f4f7m\" (UID: \"8ee99294-4785-49d0-b493-0d734cf09396\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-f4f7m" Mar 18 
10:10:46.239416 master-0 kubenswrapper[30420]: I0318 10:10:46.239366 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v8jq\" (UniqueName: \"kubernetes.io/projected/1cb8ab19-0564-4182-a7e3-0943c1480663-kube-api-access-4v8jq\") pod \"node-exporter-l9q9t\" (UID: \"1cb8ab19-0564-4182-a7e3-0943c1480663\") " pod="openshift-monitoring/node-exporter-l9q9t" Mar 18 10:10:46.252265 master-0 kubenswrapper[30420]: I0318 10:10:46.252210 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzzjs\" (UniqueName: \"kubernetes.io/projected/1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7-kube-api-access-wzzjs\") pod \"redhat-marketplace-8w5rc\" (UID: \"1bc2b4ba-35ac-4d2d-adb9-362a6c0eb6a7\") " pod="openshift-marketplace/redhat-marketplace-8w5rc" Mar 18 10:10:46.273657 master-0 kubenswrapper[30420]: I0318 10:10:46.273521 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqx6m\" (UniqueName: \"kubernetes.io/projected/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-kube-api-access-fqx6m\") pod \"metrics-server-74c475bc87-xx98m\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 10:10:46.292561 master-0 kubenswrapper[30420]: I0318 10:10:46.292492 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rzsk\" (UniqueName: \"kubernetes.io/projected/74795f5d-dcd7-4723-8931-c34b59ce3087-kube-api-access-8rzsk\") pod \"network-check-target-42l55\" (UID: \"74795f5d-dcd7-4723-8931-c34b59ce3087\") " pod="openshift-network-diagnostics/network-check-target-42l55" Mar 18 10:10:46.312482 master-0 kubenswrapper[30420]: I0318 10:10:46.312406 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn7zt\" (UniqueName: \"kubernetes.io/projected/f875878f-3588-42f1-9488-750d9f4582f8-kube-api-access-nn7zt\") pod \"multus-admission-controller-58c9f8fc64-ssnvh\" (UID: 
\"f875878f-3588-42f1-9488-750d9f4582f8\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-ssnvh" Mar 18 10:10:46.333525 master-0 kubenswrapper[30420]: I0318 10:10:46.333458 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkvcs\" (UniqueName: \"kubernetes.io/projected/af1bbeee-1faf-43d1-943f-ee5319cef4e9-kube-api-access-nkvcs\") pod \"openshift-state-metrics-5dc6c74576-6rrn7\" (UID: \"af1bbeee-1faf-43d1-943f-ee5319cef4e9\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-6rrn7" Mar 18 10:10:46.350163 master-0 kubenswrapper[30420]: I0318 10:10:46.350085 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6a6a616d-012a-479e-ab3d-b21295ea1805-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-smghb\" (UID: \"6a6a616d-012a-479e-ab3d-b21295ea1805\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-smghb" Mar 18 10:10:46.374750 master-0 kubenswrapper[30420]: I0318 10:10:46.374665 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw4s4\" (UniqueName: \"kubernetes.io/projected/8b906fc0-f2bf-4586-97e6-921bbd467b65-kube-api-access-rw4s4\") pod \"apiserver-6d58f9cc86-7vcln\" (UID: \"8b906fc0-f2bf-4586-97e6-921bbd467b65\") " pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln" Mar 18 10:10:46.398885 master-0 kubenswrapper[30420]: I0318 10:10:46.398792 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25k9g\" (UniqueName: \"kubernetes.io/projected/ee376320-9ca0-444d-ab37-9cbcb6729b11-kube-api-access-25k9g\") pod \"catalog-operator-68f85b4d6c-fhz5s\" (UID: \"ee376320-9ca0-444d-ab37-9cbcb6729b11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s" Mar 18 10:10:46.411083 master-0 kubenswrapper[30420]: I0318 10:10:46.410928 30420 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-hcj8f\" (UniqueName: \"kubernetes.io/projected/03de1ea6-da57-4e13-8e5a-d5e10a9f9957-kube-api-access-hcj8f\") pod \"multus-xgdvw\" (UID: \"03de1ea6-da57-4e13-8e5a-d5e10a9f9957\") " pod="openshift-multus/multus-xgdvw" Mar 18 10:10:46.433232 master-0 kubenswrapper[30420]: I0318 10:10:46.433049 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmffc\" (UniqueName: \"kubernetes.io/projected/9d02e790-b9d0-4e2d-a97d-ec2eaf720f28-kube-api-access-gmffc\") pod \"ovnkube-node-frnfl\" (UID: \"9d02e790-b9d0-4e2d-a97d-ec2eaf720f28\") " pod="openshift-ovn-kubernetes/ovnkube-node-frnfl" Mar 18 10:10:46.449183 master-0 kubenswrapper[30420]: I0318 10:10:46.449106 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/accc57fb-75f5-4f89-9804-6ede7f77e27c-bound-sa-token\") pod \"ingress-operator-66b84d69b-kr5kz\" (UID: \"accc57fb-75f5-4f89-9804-6ede7f77e27c\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-kr5kz" Mar 18 10:10:46.468419 master-0 kubenswrapper[30420]: I0318 10:10:46.468352 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-vj8tt\" (UID: \"3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vj8tt" Mar 18 10:10:46.492437 master-0 kubenswrapper[30420]: I0318 10:10:46.492349 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2g9q\" (UniqueName: \"kubernetes.io/projected/b6948f93-b573-4f09-b754-aaa2269e2875-kube-api-access-t2g9q\") pod \"operator-controller-controller-manager-57777556ff-77n8q\" (UID: \"b6948f93-b573-4f09-b754-aaa2269e2875\") " 
pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q" Mar 18 10:10:46.512765 master-0 kubenswrapper[30420]: I0318 10:10:46.512667 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvhfc\" (UniqueName: \"kubernetes.io/projected/71755097-7543-48f8-8925-0e21650bf8f6-kube-api-access-qvhfc\") pod \"insights-operator-68bf6ff9d6-bdcw7\" (UID: \"71755097-7543-48f8-8925-0e21650bf8f6\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bdcw7" Mar 18 10:10:46.531250 master-0 kubenswrapper[30420]: I0318 10:10:46.531133 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxj5c\" (UniqueName: \"kubernetes.io/projected/d0605021-862d-424a-a4c1-037fb005b77e-kube-api-access-cxj5c\") pod \"ovnkube-control-plane-57f769d897-8txtx\" (UID: \"d0605021-862d-424a-a4c1-037fb005b77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-8txtx" Mar 18 10:10:46.551258 master-0 kubenswrapper[30420]: I0318 10:10:46.551206 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xttqt\" (UniqueName: \"kubernetes.io/projected/9f5c64aa-676e-4e48-b714-02f6edb1d361-kube-api-access-xttqt\") pod \"cluster-autoscaler-operator-866dc4744-mw9tt\" (UID: \"9f5c64aa-676e-4e48-b714-02f6edb1d361\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-mw9tt" Mar 18 10:10:46.575018 master-0 kubenswrapper[30420]: I0318 10:10:46.574955 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blfkg\" (UniqueName: \"kubernetes.io/projected/9cfd2323-c33a-4d80-9c25-710920c0e605-kube-api-access-blfkg\") pod \"prometheus-operator-6c8df6d4b-886k6\" (UID: \"9cfd2323-c33a-4d80-9c25-710920c0e605\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-886k6" Mar 18 10:10:46.592287 master-0 kubenswrapper[30420]: I0318 10:10:46.592234 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-fp8vt\" (UniqueName: \"kubernetes.io/projected/1ad4aa30-f7d5-47ca-b01e-2643f7195685-kube-api-access-fp8vt\") pod \"machine-approver-5c6485487f-95jvh\" (UID: \"1ad4aa30-f7d5-47ca-b01e-2643f7195685\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-95jvh" Mar 18 10:10:46.611772 master-0 kubenswrapper[30420]: I0318 10:10:46.611687 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmxj9\" (UniqueName: \"kubernetes.io/projected/2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0-kube-api-access-gmxj9\") pod \"machine-config-controller-b4f87c5b9-rslmx\" (UID: \"2f9bc248-ebcb-4ce9-99b8-a10c53e08ba0\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-rslmx" Mar 18 10:10:46.632192 master-0 kubenswrapper[30420]: I0318 10:10:46.632115 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/432f611b-a1a2-4cc9-b005-17a16413d281-kube-api-access\") pod \"cluster-version-operator-7d58488df-9nd2s\" (UID: \"432f611b-a1a2-4cc9-b005-17a16413d281\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-9nd2s" Mar 18 10:10:46.663111 master-0 kubenswrapper[30420]: I0318 10:10:46.663042 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxf74\" (UniqueName: \"kubernetes.io/projected/aa4cba67-b5d4-46c2-8cad-1a1379f764cb-kube-api-access-sxf74\") pod \"telemeter-client-585cb8cdb6-g2jjm\" (UID: \"aa4cba67-b5d4-46c2-8cad-1a1379f764cb\") " pod="openshift-monitoring/telemeter-client-585cb8cdb6-g2jjm" Mar 18 10:10:46.673597 master-0 kubenswrapper[30420]: I0318 10:10:46.673505 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4btrk\" (UniqueName: \"kubernetes.io/projected/2d014721-ed53-447a-b737-c496bbba18be-kube-api-access-4btrk\") pod \"machine-config-operator-84d549f6d5-gnl5t\" (UID: 
\"2d014721-ed53-447a-b737-c496bbba18be\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-gnl5t" Mar 18 10:10:46.699440 master-0 kubenswrapper[30420]: I0318 10:10:46.699371 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x46bf\" (UniqueName: \"kubernetes.io/projected/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-kube-api-access-x46bf\") pod \"controller-manager-6c87d45bb4-vxcx9\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") " pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" Mar 18 10:10:46.707496 master-0 kubenswrapper[30420]: I0318 10:10:46.707459 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm2nt\" (UniqueName: \"kubernetes.io/projected/29fbc78b-1887-40d4-8165-f0f7cc40b583-kube-api-access-vm2nt\") pod \"machine-api-operator-6fbb6cf6f9-xnvn9\" (UID: \"29fbc78b-1887-40d4-8165-f0f7cc40b583\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-xnvn9" Mar 18 10:10:46.730726 master-0 kubenswrapper[30420]: E0318 10:10:46.730682 30420 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 10:10:46.730726 master-0 kubenswrapper[30420]: E0318 10:10:46.730729 30420 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-4-retry-1-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 10:10:46.730961 master-0 kubenswrapper[30420]: E0318 10:10:46.730858 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3657106-1eea-4031-8c92-85ba6287b425-kube-api-access podName:a3657106-1eea-4031-8c92-85ba6287b425 nodeName:}" failed. No retries permitted until 2026-03-18 10:10:47.230815794 +0000 UTC m=+11.283561723 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/a3657106-1eea-4031-8c92-85ba6287b425-kube-api-access") pod "installer-4-retry-1-master-0" (UID: "a3657106-1eea-4031-8c92-85ba6287b425") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 10:10:46.751262 master-0 kubenswrapper[30420]: I0318 10:10:46.751161 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b46jq\" (UniqueName: \"kubernetes.io/projected/5ea90fee-5b5e-4b59-bfc4-969ee8c7912e-kube-api-access-b46jq\") pod \"service-ca-79bc6b8d76-jjcsv\" (UID: \"5ea90fee-5b5e-4b59-bfc4-969ee8c7912e\") " pod="openshift-service-ca/service-ca-79bc6b8d76-jjcsv" Mar 18 10:10:46.769937 master-0 kubenswrapper[30420]: I0318 10:10:46.769768 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4qbs\" (UniqueName: \"kubernetes.io/projected/aaadd000-4db7-4264-bfc1-b0ad63c8fb05-kube-api-access-v4qbs\") pod \"network-check-source-b4bf74f6-4kpnv\" (UID: \"aaadd000-4db7-4264-bfc1-b0ad63c8fb05\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-4kpnv" Mar 18 10:10:46.787377 master-0 kubenswrapper[30420]: I0318 10:10:46.787309 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w8sl\" (UniqueName: \"kubernetes.io/projected/91331360-dc70-45bb-a815-e00664bae6c4-kube-api-access-8w8sl\") pod \"multus-additional-cni-plugins-dg6dw\" (UID: \"91331360-dc70-45bb-a815-e00664bae6c4\") " pod="openshift-multus/multus-additional-cni-plugins-dg6dw" Mar 18 10:10:46.808699 master-0 kubenswrapper[30420]: I0318 10:10:46.808630 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k29kr\" (UniqueName: \"kubernetes.io/projected/0945a421-d7c4-46df-b3d9-507443627d51-kube-api-access-k29kr\") pod \"redhat-operators-jl7c8\" (UID: \"0945a421-d7c4-46df-b3d9-507443627d51\") " 
pod="openshift-marketplace/redhat-operators-jl7c8" Mar 18 10:10:46.814783 master-0 kubenswrapper[30420]: I0318 10:10:46.814671 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3657106-1eea-4031-8c92-85ba6287b425-kube-api-access\") pod \"a3657106-1eea-4031-8c92-85ba6287b425\" (UID: \"a3657106-1eea-4031-8c92-85ba6287b425\") " Mar 18 10:10:46.821038 master-0 kubenswrapper[30420]: I0318 10:10:46.820972 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3657106-1eea-4031-8c92-85ba6287b425-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a3657106-1eea-4031-8c92-85ba6287b425" (UID: "a3657106-1eea-4031-8c92-85ba6287b425"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 10:10:46.827698 master-0 kubenswrapper[30420]: I0318 10:10:46.827641 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f25pg\" (UniqueName: \"kubernetes.io/projected/f076eaf0-b041-4db0-ba06-3d85e23bb654-kube-api-access-f25pg\") pod \"authentication-operator-5885bfd7f4-4q9tr\" (UID: \"f076eaf0-b041-4db0-ba06-3d85e23bb654\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-4q9tr" Mar 18 10:10:46.916639 master-0 kubenswrapper[30420]: I0318 10:10:46.916576 30420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3657106-1eea-4031-8c92-85ba6287b425-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 10:10:46.952019 master-0 kubenswrapper[30420]: E0318 10:10:46.951955 30420 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"openshift-kube-scheduler-master-0\" already exists" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 10:10:46.952299 master-0 kubenswrapper[30420]: E0318 10:10:46.952248 30420 kubelet.go:1929] "Failed 
creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 10:10:46.953407 master-0 kubenswrapper[30420]: E0318 10:10:46.953359 30420 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-master-0\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:10:46.960147 master-0 kubenswrapper[30420]: E0318 10:10:46.959183 30420 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Mar 18 10:10:46.998125 master-0 kubenswrapper[30420]: E0318 10:10:46.998052 30420 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.665s" Mar 18 10:10:46.998125 master-0 kubenswrapper[30420]: I0318 10:10:46.998096 30420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 18 10:10:46.998125 master-0 kubenswrapper[30420]: I0318 10:10:46.998110 30420 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="12ad3f33-0f81-4684-b296-86becb421afc" Mar 18 10:10:46.998125 master-0 kubenswrapper[30420]: I0318 10:10:46.998132 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"ceb0752eea3da310ec4f97706cc49b9e5802cdc6a08264ab2c0725b45c7967d0"} Mar 18 10:10:46.998584 master-0 kubenswrapper[30420]: I0318 10:10:46.998189 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" Mar 18 10:10:46.998584 master-0 kubenswrapper[30420]: I0318 10:10:46.998205 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"86830c5df134908b2f32b76c37923b88bd5a444c6f59be882124eea01b53a83c"} Mar 18 10:10:46.998584 master-0 kubenswrapper[30420]: I0318 10:10:46.998233 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-495pg" Mar 18 10:10:46.998584 master-0 kubenswrapper[30420]: I0318 10:10:46.998246 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerDied","Data":"ceb0752eea3da310ec4f97706cc49b9e5802cdc6a08264ab2c0725b45c7967d0"} Mar 18 10:10:47.014351 master-0 kubenswrapper[30420]: I0318 10:10:47.014275 30420 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Mar 18 10:10:47.038522 master-0 kubenswrapper[30420]: I0318 10:10:47.038347 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-retry-1-master-0" event={"ID":"a3657106-1eea-4031-8c92-85ba6287b425","Type":"ContainerDied","Data":"3acdf5b69c1ce66294030ac402e9c8e09366d47522c5ff94a22e2363f49e4024"} Mar 18 10:10:47.038929 master-0 kubenswrapper[30420]: I0318 10:10:47.038797 30420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3acdf5b69c1ce66294030ac402e9c8e09366d47522c5ff94a22e2363f49e4024" Mar 18 10:10:47.039217 master-0 kubenswrapper[30420]: I0318 10:10:47.039183 30420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 18 10:10:47.039347 master-0 kubenswrapper[30420]: I0318 10:10:47.039323 30420 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" 
mirrorPodUID="12ad3f33-0f81-4684-b296-86becb421afc" Mar 18 10:10:47.039521 master-0 kubenswrapper[30420]: I0318 10:10:47.039501 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:10:47.039667 master-0 kubenswrapper[30420]: I0318 10:10:47.039646 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 10:10:47.039991 master-0 kubenswrapper[30420]: I0318 10:10:47.039971 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:10:47.040135 master-0 kubenswrapper[30420]: I0318 10:10:47.040116 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 18 10:10:47.040355 master-0 kubenswrapper[30420]: I0318 10:10:47.040336 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-z9sf5" Mar 18 10:10:47.040514 master-0 kubenswrapper[30420]: I0318 10:10:47.040496 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:10:47.040754 master-0 kubenswrapper[30420]: I0318 10:10:47.040733 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:10:47.040916 master-0 kubenswrapper[30420]: I0318 10:10:47.040897 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Mar 18 10:10:47.041061 master-0 kubenswrapper[30420]: I0318 10:10:47.041041 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-z9sf5" Mar 18 10:10:47.041339 master-0 kubenswrapper[30420]: I0318 10:10:47.041319 30420 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 18 10:10:47.041667 master-0 kubenswrapper[30420]: I0318 10:10:47.041614 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Mar 18 10:10:47.041987 master-0 kubenswrapper[30420]: I0318 10:10:47.041964 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-82tbk"
Mar 18 10:10:47.042289 master-0 kubenswrapper[30420]: I0318 10:10:47.042248 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8w5rc"
Mar 18 10:10:47.042498 master-0 kubenswrapper[30420]: I0318 10:10:47.042453 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-42l55"
Mar 18 10:10:47.043478 master-0 kubenswrapper[30420]: I0318 10:10:47.043075 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8w5rc"
Mar 18 10:10:47.043601 master-0 kubenswrapper[30420]: I0318 10:10:47.043492 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-42l55"
Mar 18 10:10:47.043601 master-0 kubenswrapper[30420]: I0318 10:10:47.043537 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-687747fbb4-k7dnf"
Mar 18 10:10:47.043719 master-0 kubenswrapper[30420]: I0318 10:10:47.043610 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-687747fbb4-k7dnf"
Mar 18 10:10:47.043719 master-0 kubenswrapper[30420]: I0318 10:10:47.043637 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4wcqx"
Mar 18 10:10:47.043719 master-0 kubenswrapper[30420]: I0318 10:10:47.043659 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4wcqx"
Mar 18 10:10:47.043719 master-0 kubenswrapper[30420]: I0318 10:10:47.043711 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln"
Mar 18 10:10:47.043899 master-0 kubenswrapper[30420]: I0318 10:10:47.043736 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-687747fbb4-k7dnf"
Mar 18 10:10:47.043899 master-0 kubenswrapper[30420]: I0318 10:10:47.043786 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 10:10:47.043899 master-0 kubenswrapper[30420]: I0318 10:10:47.043873 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jl7c8"
Mar 18 10:10:47.044009 master-0 kubenswrapper[30420]: I0318 10:10:47.043911 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8w5rc"
Mar 18 10:10:47.044009 master-0 kubenswrapper[30420]: I0318 10:10:47.043942 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nzqck"
Mar 18 10:10:47.044009 master-0 kubenswrapper[30420]: I0318 10:10:47.043967 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nzqck"
Mar 18 10:10:47.044009 master-0 kubenswrapper[30420]: I0318 10:10:47.043995 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw"
Mar 18 10:10:47.044162 master-0 kubenswrapper[30420]: I0318 10:10:47.044021 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-nq7mw"
Mar 18 10:10:47.044162 master-0 kubenswrapper[30420]: I0318 10:10:47.044046 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s"
Mar 18 10:10:47.044162 master-0 kubenswrapper[30420]: I0318 10:10:47.044091 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 10:10:47.044162 master-0 kubenswrapper[30420]: I0318 10:10:47.044131 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 10:10:47.044308 master-0 kubenswrapper[30420]: I0318 10:10:47.044222 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q"
Mar 18 10:10:47.044628 master-0 kubenswrapper[30420]: I0318 10:10:47.044588 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln"
Mar 18 10:10:47.044689 master-0 kubenswrapper[30420]: I0318 10:10:47.044643 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-fhz5s"
Mar 18 10:10:47.044689 master-0 kubenswrapper[30420]: I0318 10:10:47.044686 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 10:10:47.046229 master-0 kubenswrapper[30420]: I0318 10:10:47.046179 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jl7c8"
Mar 18 10:10:47.046365 master-0 kubenswrapper[30420]: I0318 10:10:47.046352 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9"
Mar 18 10:10:47.046480 master-0 kubenswrapper[30420]: I0318 10:10:47.046466 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-77n8q"
Mar 18 10:10:47.046585 master-0 kubenswrapper[30420]: I0318 10:10:47.046561 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 10:10:47.046688 master-0 kubenswrapper[30420]: I0318 10:10:47.046673 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln"
Mar 18 10:10:47.048236 master-0 kubenswrapper[30420]: I0318 10:10:47.048189 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9"
Mar 18 10:10:47.087898 master-0 kubenswrapper[30420]: I0318 10:10:47.087845 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jl7c8"
Mar 18 10:10:47.556518 master-0 kubenswrapper[30420]: I0318 10:10:47.556422 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68"
Mar 18 10:10:47.564234 master-0 kubenswrapper[30420]: I0318 10:10:47.564176 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68"
Mar 18 10:10:47.742892 master-0 kubenswrapper[30420]: I0318 10:10:47.742787 30420 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 18 10:10:47.745091 master-0 kubenswrapper[30420]: I0318 10:10:47.744990 30420 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="8f17c553-e707-4d8d-bd31-e8f28f3898bb"
Mar 18 10:10:47.745091 master-0 kubenswrapper[30420]: I0318 10:10:47.745035 30420 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="8f17c553-e707-4d8d-bd31-e8f28f3898bb"
Mar 18 10:10:47.779199 master-0 kubenswrapper[30420]: I0318 10:10:47.779131 30420 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:10:47.779199 master-0 kubenswrapper[30420]: [-]has-synced failed: reason withheld
Mar 18 10:10:47.779199 master-0 kubenswrapper[30420]: [+]process-running ok
Mar 18 10:10:47.779199 master-0 kubenswrapper[30420]: healthz check failed
Mar 18 10:10:47.780000 master-0 kubenswrapper[30420]: I0318 10:10:47.779206 30420 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:10:47.789207 master-0 kubenswrapper[30420]: E0318 10:10:47.789155 30420 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 10:10:47.789767 master-0 kubenswrapper[30420]: I0318 10:10:47.789564 30420 scope.go:117] "RemoveContainer" containerID="ceb0752eea3da310ec4f97706cc49b9e5802cdc6a08264ab2c0725b45c7967d0"
Mar 18 10:10:47.804907 master-0 kubenswrapper[30420]: I0318 10:10:47.804813 30420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Mar 18 10:10:47.811147 master-0 kubenswrapper[30420]: I0318 10:10:47.809963 30420 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 18 10:10:47.811611 master-0 kubenswrapper[30420]: I0318 10:10:47.811547 30420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Mar 18 10:10:47.844913 master-0 kubenswrapper[30420]: I0318 10:10:47.844867 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Mar 18 10:10:47.958510 master-0 kubenswrapper[30420]: I0318 10:10:47.958416 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=1.958397042 podStartE2EDuration="1.958397042s" podCreationTimestamp="2026-03-18 10:10:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:10:47.958368261 +0000 UTC m=+12.011114200" watchObservedRunningTime="2026-03-18 10:10:47.958397042 +0000 UTC m=+12.011142971"
Mar 18 10:10:48.748936 master-0 kubenswrapper[30420]: I0318 10:10:48.748859 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_7d5ce05b3d592e63f1f92202d52b9635/kube-apiserver-check-endpoints/0.log"
Mar 18 10:10:48.750915 master-0 kubenswrapper[30420]: I0318 10:10:48.750866 30420 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 18 10:10:48.751008 master-0 kubenswrapper[30420]: I0318 10:10:48.750907 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"2e73e17f2dee64b431d0ee9faadf4eec1b11e6de7060e811ff51b2fd90ed860d"}
Mar 18 10:10:48.751540 master-0 kubenswrapper[30420]: I0318 10:10:48.751490 30420 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="8f17c553-e707-4d8d-bd31-e8f28f3898bb"
Mar 18 10:10:48.751540 master-0 kubenswrapper[30420]: I0318 10:10:48.751521 30420 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="8f17c553-e707-4d8d-bd31-e8f28f3898bb"
Mar 18 10:10:48.779683 master-0 kubenswrapper[30420]: I0318 10:10:48.779621 30420 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:10:48.779683 master-0 kubenswrapper[30420]: [-]has-synced failed: reason withheld
Mar 18 10:10:48.779683 master-0 kubenswrapper[30420]: [+]process-running ok
Mar 18 10:10:48.779683 master-0 kubenswrapper[30420]: healthz check failed
Mar 18 10:10:48.780028 master-0 kubenswrapper[30420]: I0318 10:10:48.779732 30420 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:10:48.812530 master-0 kubenswrapper[30420]: I0318 10:10:48.812455 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-82tbk"
Mar 18 10:10:49.180066 master-0 kubenswrapper[30420]: I0318 10:10:49.179988 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pdfn6"
Mar 18 10:10:49.697433 master-0 kubenswrapper[30420]: I0318 10:10:49.695562 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=3.695544075 podStartE2EDuration="3.695544075s" podCreationTimestamp="2026-03-18 10:10:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:10:49.646255198 +0000 UTC m=+13.699001127" watchObservedRunningTime="2026-03-18 10:10:49.695544075 +0000 UTC m=+13.748290004"
Mar 18 10:10:49.761287 master-0 kubenswrapper[30420]: I0318 10:10:49.760428 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 10:10:49.778840 master-0 kubenswrapper[30420]: I0318 10:10:49.777810 30420 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:10:49.778840 master-0 kubenswrapper[30420]: [-]has-synced failed: reason withheld
Mar 18 10:10:49.778840 master-0 kubenswrapper[30420]: [+]process-running ok
Mar 18 10:10:49.778840 master-0 kubenswrapper[30420]: healthz check failed
Mar 18 10:10:49.778840 master-0 kubenswrapper[30420]: I0318 10:10:49.777889 30420 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:10:49.856711 master-0 kubenswrapper[30420]: I0318 10:10:49.856621 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=2.85659226 podStartE2EDuration="2.85659226s" podCreationTimestamp="2026-03-18 10:10:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:10:49.849143643 +0000 UTC m=+13.901889572" watchObservedRunningTime="2026-03-18 10:10:49.85659226 +0000 UTC m=+13.909338189"
Mar 18 10:10:50.777945 master-0 kubenswrapper[30420]: I0318 10:10:50.777874 30420 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:10:50.777945 master-0 kubenswrapper[30420]: [-]has-synced failed: reason withheld
Mar 18 10:10:50.777945 master-0 kubenswrapper[30420]: [+]process-running ok
Mar 18 10:10:50.777945 master-0 kubenswrapper[30420]: healthz check failed
Mar 18 10:10:50.778620 master-0 kubenswrapper[30420]: I0318 10:10:50.777962 30420 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:10:51.387965 master-0 kubenswrapper[30420]: I0318 10:10:51.387816 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-687747fbb4-k7dnf"
Mar 18 10:10:51.635848 master-0 kubenswrapper[30420]: I0318 10:10:51.635738 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv"
Mar 18 10:10:51.637347 master-0 kubenswrapper[30420]: I0318 10:10:51.637311 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-2glpv"
Mar 18 10:10:51.690949 master-0 kubenswrapper[30420]: I0318 10:10:51.689705 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-6d58f9cc86-7vcln"
Mar 18 10:10:51.783847 master-0 kubenswrapper[30420]: I0318 10:10:51.776783 30420 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:10:51.783847 master-0 kubenswrapper[30420]: [-]has-synced failed: reason withheld
Mar 18 10:10:51.783847 master-0 kubenswrapper[30420]: [+]process-running ok
Mar 18 10:10:51.783847 master-0 kubenswrapper[30420]: healthz check failed
Mar 18 10:10:51.783847 master-0 kubenswrapper[30420]: I0318 10:10:51.776879 30420 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:10:51.823862 master-0 kubenswrapper[30420]: I0318 10:10:51.820453 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 10:10:51.823862 master-0 kubenswrapper[30420]: I0318 10:10:51.820583 30420 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 18 10:10:51.829212 master-0 kubenswrapper[30420]: I0318 10:10:51.829169 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 10:10:52.590719 master-0 kubenswrapper[30420]: I0318 10:10:52.590656 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 10:10:52.778301 master-0 kubenswrapper[30420]: I0318 10:10:52.778236 30420 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:10:52.778301 master-0 kubenswrapper[30420]: [-]has-synced failed: reason withheld
Mar 18 10:10:52.778301 master-0 kubenswrapper[30420]: [+]process-running ok
Mar 18 10:10:52.778301 master-0 kubenswrapper[30420]: healthz check failed
Mar 18 10:10:52.778648 master-0 kubenswrapper[30420]: I0318 10:10:52.778327 30420 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:10:52.804083 master-0 kubenswrapper[30420]: I0318 10:10:52.804036 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 10:10:53.426346 master-0 kubenswrapper[30420]: I0318 10:10:53.426287 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv"
Mar 18 10:10:53.428403 master-0 kubenswrapper[30420]: I0318 10:10:53.428373 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-r8fkv"
Mar 18 10:10:53.656357 master-0 kubenswrapper[30420]: I0318 10:10:53.656287 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k"
Mar 18 10:10:53.666596 master-0 kubenswrapper[30420]: I0318 10:10:53.665196 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-hc74k"
Mar 18 10:10:53.787298 master-0 kubenswrapper[30420]: I0318 10:10:53.785011 30420 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:10:53.787298 master-0 kubenswrapper[30420]: [-]has-synced failed: reason withheld
Mar 18 10:10:53.787298 master-0 kubenswrapper[30420]: [+]process-running ok
Mar 18 10:10:53.787298 master-0 kubenswrapper[30420]: healthz check failed
Mar 18 10:10:53.787298 master-0 kubenswrapper[30420]: I0318 10:10:53.785069 30420 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:10:54.083953 master-0 kubenswrapper[30420]: I0318 10:10:54.083747 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pdfn6"
Mar 18 10:10:54.777006 master-0 kubenswrapper[30420]: I0318 10:10:54.776931 30420 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:10:54.777006 master-0 kubenswrapper[30420]: [-]has-synced failed: reason withheld
Mar 18 10:10:54.777006 master-0 kubenswrapper[30420]: [+]process-running ok
Mar 18 10:10:54.777006 master-0 kubenswrapper[30420]: healthz check failed
Mar 18 10:10:54.777006 master-0 kubenswrapper[30420]: I0318 10:10:54.776995 30420 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:10:55.672115 master-0 kubenswrapper[30420]: I0318 10:10:55.672048 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nzqck"
Mar 18 10:10:55.713525 master-0 kubenswrapper[30420]: I0318 10:10:55.713455 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nzqck"
Mar 18 10:10:55.777194 master-0 kubenswrapper[30420]: I0318 10:10:55.777136 30420 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:10:55.777194 master-0 kubenswrapper[30420]: [-]has-synced failed: reason withheld
Mar 18 10:10:55.777194 master-0 kubenswrapper[30420]: [+]process-running ok
Mar 18 10:10:55.777194 master-0 kubenswrapper[30420]: healthz check failed
Mar 18 10:10:55.777539 master-0 kubenswrapper[30420]: I0318 10:10:55.777213 30420 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:10:55.880512 master-0 kubenswrapper[30420]: I0318 10:10:55.880468 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-7b64dcc66c-2vx58"
Mar 18 10:10:55.884178 master-0 kubenswrapper[30420]: I0318 10:10:55.884142 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7b64dcc66c-2vx58"
Mar 18 10:10:56.432884 master-0 kubenswrapper[30420]: I0318 10:10:56.432838 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8w5rc"
Mar 18 10:10:56.778317 master-0 kubenswrapper[30420]: I0318 10:10:56.778195 30420 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:10:56.778317 master-0 kubenswrapper[30420]: [-]has-synced failed: reason withheld
Mar 18 10:10:56.778317 master-0 kubenswrapper[30420]: [+]process-running ok
Mar 18 10:10:56.778317 master-0 kubenswrapper[30420]: healthz check failed
Mar 18 10:10:56.778317 master-0 kubenswrapper[30420]: I0318 10:10:56.778255 30420 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:10:57.045002 master-0 kubenswrapper[30420]: I0318 10:10:57.044924 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jl7c8"
Mar 18 10:10:57.778361 master-0 kubenswrapper[30420]: I0318 10:10:57.778247 30420 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-82tbk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 10:10:57.778361 master-0 kubenswrapper[30420]: [-]has-synced failed: reason withheld
Mar 18 10:10:57.778361 master-0 kubenswrapper[30420]: [+]process-running ok
Mar 18 10:10:57.778361 master-0 kubenswrapper[30420]: healthz check failed
Mar 18 10:10:57.779708 master-0 kubenswrapper[30420]: I0318 10:10:57.778369 30420 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-82tbk" podUID="43d54514-989c-4c82-93f9-153b44eacdd1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 10:10:58.785892 master-0 kubenswrapper[30420]: I0318 10:10:58.785452 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-7dcf5569b5-82tbk"
Mar 18 10:10:58.790079 master-0 kubenswrapper[30420]: I0318 10:10:58.789991 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-7dcf5569b5-82tbk"
Mar 18 10:10:59.241658 master-0 kubenswrapper[30420]: I0318 10:10:59.241465 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pdfn6"
Mar 18 10:10:59.319502 master-0 kubenswrapper[30420]: I0318 10:10:59.319422 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pdfn6"
Mar 18 10:11:02.092978 master-0 kubenswrapper[30420]: I0318 10:11:02.092813 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-74c475bc87-xx98m"
Mar 18 10:11:02.907240 master-0 kubenswrapper[30420]: I0318 10:11:02.907148 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 10:11:03.424164 master-0 kubenswrapper[30420]: I0318 10:11:03.424070 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-74c475bc87-xx98m"
Mar 18 10:11:08.427818 master-0 kubenswrapper[30420]: I0318 10:11:08.427729 30420 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 18 10:11:08.429037 master-0 kubenswrapper[30420]: I0318 10:11:08.428117 30420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" containerName="startup-monitor" containerID="cri-o://66dba26b707d8a7ef9a56c2e052eb81cdb6a21e228ccc4ca178ec7f65804ffae" gracePeriod=5
Mar 18 10:11:13.665899 master-0 kubenswrapper[30420]: I0318 10:11:13.665772 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 10:11:13.667436 master-0 kubenswrapper[30420]: I0318 10:11:13.666082 30420 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 18 10:11:13.698903 master-0 kubenswrapper[30420]: I0318 10:11:13.698650 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-frnfl"
Mar 18 10:11:13.997685 master-0 kubenswrapper[30420]: I0318 10:11:13.997615 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_16fb4ea7f83036d9c6adf3454fc7e9db/startup-monitor/0.log"
Mar 18 10:11:13.997903 master-0 kubenswrapper[30420]: I0318 10:11:13.997703 30420 generic.go:334] "Generic (PLEG): container finished" podID="16fb4ea7f83036d9c6adf3454fc7e9db" containerID="66dba26b707d8a7ef9a56c2e052eb81cdb6a21e228ccc4ca178ec7f65804ffae" exitCode=137
Mar 18 10:11:13.999435 master-0 kubenswrapper[30420]: I0318 10:11:13.999390 30420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03355a5e2caa4496c4b10efd4243dd60c302d54b340a80972ebe3e5661f0dd6b"
Mar 18 10:11:14.025356 master-0 kubenswrapper[30420]: I0318 10:11:14.025264 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_16fb4ea7f83036d9c6adf3454fc7e9db/startup-monitor/0.log"
Mar 18 10:11:14.025578 master-0 kubenswrapper[30420]: I0318 10:11:14.025453 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 10:11:14.044223 master-0 kubenswrapper[30420]: I0318 10:11:14.044125 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir\") pod \"16fb4ea7f83036d9c6adf3454fc7e9db\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") "
Mar 18 10:11:14.044433 master-0 kubenswrapper[30420]: I0318 10:11:14.044330 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock\") pod \"16fb4ea7f83036d9c6adf3454fc7e9db\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") "
Mar 18 10:11:14.044433 master-0 kubenswrapper[30420]: I0318 10:11:14.044426 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests\") pod \"16fb4ea7f83036d9c6adf3454fc7e9db\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") "
Mar 18 10:11:14.044544 master-0 kubenswrapper[30420]: I0318 10:11:14.044484 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir\") pod \"16fb4ea7f83036d9c6adf3454fc7e9db\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") "
Mar 18 10:11:14.044544 master-0 kubenswrapper[30420]: I0318 10:11:14.044503 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock" (OuterVolumeSpecName: "var-lock") pod "16fb4ea7f83036d9c6adf3454fc7e9db" (UID: "16fb4ea7f83036d9c6adf3454fc7e9db"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 10:11:14.044739 master-0 kubenswrapper[30420]: I0318 10:11:14.044516 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log\") pod \"16fb4ea7f83036d9c6adf3454fc7e9db\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") "
Mar 18 10:11:14.044739 master-0 kubenswrapper[30420]: I0318 10:11:14.044610 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests" (OuterVolumeSpecName: "manifests") pod "16fb4ea7f83036d9c6adf3454fc7e9db" (UID: "16fb4ea7f83036d9c6adf3454fc7e9db"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 10:11:14.044739 master-0 kubenswrapper[30420]: I0318 10:11:14.044623 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log" (OuterVolumeSpecName: "var-log") pod "16fb4ea7f83036d9c6adf3454fc7e9db" (UID: "16fb4ea7f83036d9c6adf3454fc7e9db"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 10:11:14.044892 master-0 kubenswrapper[30420]: I0318 10:11:14.044775 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "16fb4ea7f83036d9c6adf3454fc7e9db" (UID: "16fb4ea7f83036d9c6adf3454fc7e9db"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 10:11:14.045415 master-0 kubenswrapper[30420]: I0318 10:11:14.045364 30420 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests\") on node \"master-0\" DevicePath \"\""
Mar 18 10:11:14.045415 master-0 kubenswrapper[30420]: I0318 10:11:14.045407 30420 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 10:11:14.045534 master-0 kubenswrapper[30420]: I0318 10:11:14.045428 30420 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log\") on node \"master-0\" DevicePath \"\""
Mar 18 10:11:14.045534 master-0 kubenswrapper[30420]: I0318 10:11:14.045446 30420 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 10:11:14.051796 master-0 kubenswrapper[30420]: I0318 10:11:14.051734 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "16fb4ea7f83036d9c6adf3454fc7e9db" (UID: "16fb4ea7f83036d9c6adf3454fc7e9db"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 10:11:14.146780 master-0 kubenswrapper[30420]: I0318 10:11:14.146711 30420 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 10:11:14.184930 master-0 kubenswrapper[30420]: I0318 10:11:14.182537 30420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" path="/var/lib/kubelet/pods/16fb4ea7f83036d9c6adf3454fc7e9db/volumes"
Mar 18 10:11:14.184930 master-0 kubenswrapper[30420]: I0318 10:11:14.183303 30420 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID=""
Mar 18 10:11:14.208122 master-0 kubenswrapper[30420]: I0318 10:11:14.207193 30420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 18 10:11:14.208122 master-0 kubenswrapper[30420]: I0318 10:11:14.207263 30420 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="ebdcf0af-1698-44e9-8593-78e0e3bef381"
Mar 18 10:11:14.210142 master-0 kubenswrapper[30420]: I0318 10:11:14.210081 30420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 18 10:11:14.210226 master-0 kubenswrapper[30420]: I0318 10:11:14.210141 30420 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="ebdcf0af-1698-44e9-8593-78e0e3bef381"
Mar 18 10:11:15.007491 master-0 kubenswrapper[30420]: I0318 10:11:15.007398 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 10:11:22.099805 master-0 kubenswrapper[30420]: I0318 10:11:22.099698 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-74c475bc87-xx98m"
Mar 18 10:11:22.108779 master-0 kubenswrapper[30420]: I0318 10:11:22.108705 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-74c475bc87-xx98m"
Mar 18 10:11:26.260327 master-0 kubenswrapper[30420]: I0318 10:11:26.260256 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 18 10:11:26.709211 master-0 kubenswrapper[30420]: I0318 10:11:26.709137 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-master-0"]
Mar 18 10:11:26.710134 master-0 kubenswrapper[30420]: E0318 10:11:26.709696 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6716938-ca14-4000-b7f1-b60e93e93c0d" containerName="installer"
Mar 18 10:11:26.710134 master-0 kubenswrapper[30420]: I0318 10:11:26.709727 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6716938-ca14-4000-b7f1-b60e93e93c0d" containerName="installer"
Mar 18 10:11:26.710134 master-0 kubenswrapper[30420]: E0318 10:11:26.709792 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be8bd84c-8035-4bec-a725-b0ae89382c0f" containerName="installer"
Mar 18 10:11:26.710134 master-0 kubenswrapper[30420]: I0318 10:11:26.709804 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="be8bd84c-8035-4bec-a725-b0ae89382c0f" containerName="installer"
Mar 18 10:11:26.710134 master-0 kubenswrapper[30420]: E0318 10:11:26.709862 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90db95c5-2017-4b04-b11c-9844947c5be9" containerName="installer"
Mar 18 10:11:26.710134 master-0 kubenswrapper[30420]: I0318 10:11:26.709877 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="90db95c5-2017-4b04-b11c-9844947c5be9" containerName="installer"
Mar 18 10:11:26.710134 master-0 kubenswrapper[30420]: E0318 10:11:26.709902 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54a208d1-afe8-49b5-92e0-e27afb4abc80" containerName="installer"
Mar 18 10:11:26.710134 master-0 kubenswrapper[30420]: I0318 10:11:26.709946 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="54a208d1-afe8-49b5-92e0-e27afb4abc80" containerName="installer"
Mar 18 10:11:26.710134 master-0 kubenswrapper[30420]: E0318 10:11:26.709968 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz"
Mar 18 10:11:26.710134 master-0 kubenswrapper[30420]: I0318 10:11:26.709979 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz"
Mar 18 10:11:26.710134 master-0 kubenswrapper[30420]: E0318 10:11:26.709997 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver"
Mar 18 10:11:26.710134 master-0 kubenswrapper[30420]: I0318 10:11:26.710040 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver"
Mar 18 10:11:26.710134 master-0 kubenswrapper[30420]: E0318 10:11:26.710055 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup"
Mar 18 10:11:26.710134 master-0 kubenswrapper[30420]: I0318 10:11:26.710065 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup"
Mar 18 10:11:26.710134 master-0 kubenswrapper[30420]: E0318 10:11:26.710079 30420 cpu_manager.go:410] "RemoveStaleState: removing container"
podUID="af8e875368eec13e995ea08015e08c42" containerName="kube-controller-manager-recovery-controller" Mar 18 10:11:26.710134 master-0 kubenswrapper[30420]: I0318 10:11:26.710121 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="af8e875368eec13e995ea08015e08c42" containerName="kube-controller-manager-recovery-controller" Mar 18 10:11:26.710134 master-0 kubenswrapper[30420]: E0318 10:11:26.710143 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcf01f63-ed66-4f0d-b2df-97c77bbf8543" containerName="installer" Mar 18 10:11:26.710134 master-0 kubenswrapper[30420]: I0318 10:11:26.710155 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcf01f63-ed66-4f0d-b2df-97c77bbf8543" containerName="installer" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: E0318 10:11:26.710205 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e27b7d086edf5d2cf47b703574641d8" containerName="wait-for-host-port" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.710218 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e27b7d086edf5d2cf47b703574641d8" containerName="wait-for-host-port" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: E0318 10:11:26.710235 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="346d6f79-a9bd-4097-abe7-b68830aa2e84" containerName="installer" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.710245 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="346d6f79-a9bd-4097-abe7-b68830aa2e84" containerName="installer" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: E0318 10:11:26.710295 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af8e875368eec13e995ea08015e08c42" containerName="kube-controller-manager-cert-syncer" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.710309 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="af8e875368eec13e995ea08015e08c42" 
containerName="kube-controller-manager-cert-syncer" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: E0318 10:11:26.710322 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e27b7d086edf5d2cf47b703574641d8" containerName="kube-scheduler" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.710332 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e27b7d086edf5d2cf47b703574641d8" containerName="kube-scheduler" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: E0318 10:11:26.710374 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea5939e-5f4d-4028-9384-2ec5710ecdc8" containerName="installer" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.710389 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea5939e-5f4d-4028-9384-2ec5710ecdc8" containerName="installer" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: E0318 10:11:26.710404 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c62ceda-5e7e-4392-83b9-0d80856e1a96" containerName="installer" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.710414 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c62ceda-5e7e-4392-83b9-0d80856e1a96" containerName="installer" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: E0318 10:11:26.710426 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cda3479-c3ed-4d79-bbd3-888e64b328f7" containerName="assisted-installer-controller" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.710468 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cda3479-c3ed-4d79-bbd3-888e64b328f7" containerName="assisted-installer-controller" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: E0318 10:11:26.710484 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af8e875368eec13e995ea08015e08c42" containerName="kube-controller-manager" Mar 18 10:11:26.711976 
master-0 kubenswrapper[30420]: I0318 10:11:26.710495 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="af8e875368eec13e995ea08015e08c42" containerName="kube-controller-manager" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: E0318 10:11:26.710509 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e27b7d086edf5d2cf47b703574641d8" containerName="kube-scheduler-cert-syncer" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.710549 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e27b7d086edf5d2cf47b703574641d8" containerName="kube-scheduler-cert-syncer" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: E0318 10:11:26.710569 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" containerName="startup-monitor" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.710580 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" containerName="startup-monitor" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: E0318 10:11:26.710594 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fb70bf3-93cd-4000-be1a-8e21846d5709" containerName="installer" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.710604 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fb70bf3-93cd-4000-be1a-8e21846d5709" containerName="installer" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: E0318 10:11:26.710656 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af8e875368eec13e995ea08015e08c42" containerName="cluster-policy-controller" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.710669 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="af8e875368eec13e995ea08015e08c42" containerName="cluster-policy-controller" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: E0318 10:11:26.710687 30420 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="8e27b7d086edf5d2cf47b703574641d8" containerName="kube-scheduler-recovery-controller" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.710731 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e27b7d086edf5d2cf47b703574641d8" containerName="kube-scheduler-recovery-controller" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: E0318 10:11:26.710748 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3657106-1eea-4031-8c92-85ba6287b425" containerName="installer" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.710758 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3657106-1eea-4031-8c92-85ba6287b425" containerName="installer" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: E0318 10:11:26.710805 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a8662e-66f1-4aee-9344-564bb4ac4f9a" containerName="installer" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.710860 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a8662e-66f1-4aee-9344-564bb4ac4f9a" containerName="installer" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: E0318 10:11:26.710885 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4d7edd6-7975-468e-adea-138d92ed1be1" containerName="installer" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.710896 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4d7edd6-7975-468e-adea-138d92ed1be1" containerName="installer" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: E0318 10:11:26.710915 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="449dc8b3-72b7-4be5-b5ab-ed4d632f52b2" containerName="installer" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.710959 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="449dc8b3-72b7-4be5-b5ab-ed4d632f52b2" 
containerName="installer" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.711230 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e27b7d086edf5d2cf47b703574641d8" containerName="wait-for-host-port" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.711259 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c62ceda-5e7e-4392-83b9-0d80856e1a96" containerName="installer" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.711285 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4d7edd6-7975-468e-adea-138d92ed1be1" containerName="installer" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.711306 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6716938-ca14-4000-b7f1-b60e93e93c0d" containerName="installer" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.711320 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="af8e875368eec13e995ea08015e08c42" containerName="kube-controller-manager" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.711343 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e27b7d086edf5d2cf47b703574641d8" containerName="kube-scheduler" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.711357 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="be8bd84c-8035-4bec-a725-b0ae89382c0f" containerName="installer" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.711370 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="af8e875368eec13e995ea08015e08c42" containerName="kube-controller-manager-cert-syncer" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.711386 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3657106-1eea-4031-8c92-85ba6287b425" containerName="installer" Mar 18 10:11:26.711976 master-0 
kubenswrapper[30420]: I0318 10:11:26.711401 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ea5939e-5f4d-4028-9384-2ec5710ecdc8" containerName="installer" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.711418 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="54a208d1-afe8-49b5-92e0-e27afb4abc80" containerName="installer" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.711436 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cda3479-c3ed-4d79-bbd3-888e64b328f7" containerName="assisted-installer-controller" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.711451 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e27b7d086edf5d2cf47b703574641d8" containerName="kube-scheduler-cert-syncer" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.711469 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.711479 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="af8e875368eec13e995ea08015e08c42" containerName="kube-controller-manager-recovery-controller" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.711493 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.711503 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a8662e-66f1-4aee-9344-564bb4ac4f9a" containerName="installer" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.711516 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e27b7d086edf5d2cf47b703574641d8" containerName="kube-scheduler-recovery-controller" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: 
I0318 10:11:26.711544 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcf01f63-ed66-4f0d-b2df-97c77bbf8543" containerName="installer" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.711558 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" containerName="startup-monitor" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.711576 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="346d6f79-a9bd-4097-abe7-b68830aa2e84" containerName="installer" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.711593 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="af8e875368eec13e995ea08015e08c42" containerName="cluster-policy-controller" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.711611 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="449dc8b3-72b7-4be5-b5ab-ed4d632f52b2" containerName="installer" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.711627 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fb70bf3-93cd-4000-be1a-8e21846d5709" containerName="installer" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.711637 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="90db95c5-2017-4b04-b11c-9844947c5be9" containerName="installer" Mar 18 10:11:26.711976 master-0 kubenswrapper[30420]: I0318 10:11:26.711652 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" Mar 18 10:11:26.716425 master-0 kubenswrapper[30420]: I0318 10:11:26.715312 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-0" Mar 18 10:11:26.716425 master-0 kubenswrapper[30420]: I0318 10:11:26.716005 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-master-0"] Mar 18 10:11:26.718111 master-0 kubenswrapper[30420]: I0318 10:11:26.718062 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-zr9bx" Mar 18 10:11:26.721492 master-0 kubenswrapper[30420]: I0318 10:11:26.721153 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 18 10:11:26.739893 master-0 kubenswrapper[30420]: I0318 10:11:26.739115 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bb2f55a1-1af1-49b1-9dbc-d30063d6935e-kubelet-dir\") pod \"revision-pruner-6-master-0\" (UID: \"bb2f55a1-1af1-49b1-9dbc-d30063d6935e\") " pod="openshift-kube-scheduler/revision-pruner-6-master-0" Mar 18 10:11:26.739893 master-0 kubenswrapper[30420]: I0318 10:11:26.739385 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bb2f55a1-1af1-49b1-9dbc-d30063d6935e-kube-api-access\") pod \"revision-pruner-6-master-0\" (UID: \"bb2f55a1-1af1-49b1-9dbc-d30063d6935e\") " pod="openshift-kube-scheduler/revision-pruner-6-master-0" Mar 18 10:11:26.841355 master-0 kubenswrapper[30420]: I0318 10:11:26.841248 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bb2f55a1-1af1-49b1-9dbc-d30063d6935e-kube-api-access\") pod \"revision-pruner-6-master-0\" (UID: \"bb2f55a1-1af1-49b1-9dbc-d30063d6935e\") " pod="openshift-kube-scheduler/revision-pruner-6-master-0" Mar 18 10:11:26.841576 master-0 
kubenswrapper[30420]: I0318 10:11:26.841555 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bb2f55a1-1af1-49b1-9dbc-d30063d6935e-kubelet-dir\") pod \"revision-pruner-6-master-0\" (UID: \"bb2f55a1-1af1-49b1-9dbc-d30063d6935e\") " pod="openshift-kube-scheduler/revision-pruner-6-master-0" Mar 18 10:11:26.841794 master-0 kubenswrapper[30420]: I0318 10:11:26.841741 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bb2f55a1-1af1-49b1-9dbc-d30063d6935e-kubelet-dir\") pod \"revision-pruner-6-master-0\" (UID: \"bb2f55a1-1af1-49b1-9dbc-d30063d6935e\") " pod="openshift-kube-scheduler/revision-pruner-6-master-0" Mar 18 10:11:26.854470 master-0 kubenswrapper[30420]: I0318 10:11:26.854413 30420 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 18 10:11:26.859140 master-0 kubenswrapper[30420]: I0318 10:11:26.859087 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bb2f55a1-1af1-49b1-9dbc-d30063d6935e-kube-api-access\") pod \"revision-pruner-6-master-0\" (UID: \"bb2f55a1-1af1-49b1-9dbc-d30063d6935e\") " pod="openshift-kube-scheduler/revision-pruner-6-master-0" Mar 18 10:11:27.038363 master-0 kubenswrapper[30420]: I0318 10:11:27.038302 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-0" Mar 18 10:11:27.482246 master-0 kubenswrapper[30420]: I0318 10:11:27.482054 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-master-0"] Mar 18 10:11:28.121962 master-0 kubenswrapper[30420]: I0318 10:11:28.121878 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-0" event={"ID":"bb2f55a1-1af1-49b1-9dbc-d30063d6935e","Type":"ContainerStarted","Data":"ebff7f92cdd8399f6b565354ad7192b90ec6e160cf98114b7f1c745387558557"} Mar 18 10:11:28.121962 master-0 kubenswrapper[30420]: I0318 10:11:28.121947 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-0" event={"ID":"bb2f55a1-1af1-49b1-9dbc-d30063d6935e","Type":"ContainerStarted","Data":"80a26a8c280c45ba9b25c1277b1956a8bb7fedc0a47b519ae908fedfe1146e06"} Mar 18 10:11:28.140978 master-0 kubenswrapper[30420]: I0318 10:11:28.140876 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-master-0" podStartSLOduration=2.140856184 podStartE2EDuration="2.140856184s" podCreationTimestamp="2026-03-18 10:11:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:11:28.139614793 +0000 UTC m=+52.192360722" watchObservedRunningTime="2026-03-18 10:11:28.140856184 +0000 UTC m=+52.193602113" Mar 18 10:11:29.130407 master-0 kubenswrapper[30420]: I0318 10:11:29.130354 30420 generic.go:334] "Generic (PLEG): container finished" podID="bb2f55a1-1af1-49b1-9dbc-d30063d6935e" containerID="ebff7f92cdd8399f6b565354ad7192b90ec6e160cf98114b7f1c745387558557" exitCode=0 Mar 18 10:11:29.130407 master-0 kubenswrapper[30420]: I0318 10:11:29.130411 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/revision-pruner-6-master-0" event={"ID":"bb2f55a1-1af1-49b1-9dbc-d30063d6935e","Type":"ContainerDied","Data":"ebff7f92cdd8399f6b565354ad7192b90ec6e160cf98114b7f1c745387558557"} Mar 18 10:11:30.554878 master-0 kubenswrapper[30420]: I0318 10:11:30.554803 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-0" Mar 18 10:11:30.606692 master-0 kubenswrapper[30420]: I0318 10:11:30.606228 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bb2f55a1-1af1-49b1-9dbc-d30063d6935e-kubelet-dir\") pod \"bb2f55a1-1af1-49b1-9dbc-d30063d6935e\" (UID: \"bb2f55a1-1af1-49b1-9dbc-d30063d6935e\") " Mar 18 10:11:30.606692 master-0 kubenswrapper[30420]: I0318 10:11:30.606313 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bb2f55a1-1af1-49b1-9dbc-d30063d6935e-kube-api-access\") pod \"bb2f55a1-1af1-49b1-9dbc-d30063d6935e\" (UID: \"bb2f55a1-1af1-49b1-9dbc-d30063d6935e\") " Mar 18 10:11:30.606692 master-0 kubenswrapper[30420]: I0318 10:11:30.606353 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb2f55a1-1af1-49b1-9dbc-d30063d6935e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bb2f55a1-1af1-49b1-9dbc-d30063d6935e" (UID: "bb2f55a1-1af1-49b1-9dbc-d30063d6935e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:11:30.606692 master-0 kubenswrapper[30420]: I0318 10:11:30.606585 30420 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bb2f55a1-1af1-49b1-9dbc-d30063d6935e-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 10:11:30.611567 master-0 kubenswrapper[30420]: I0318 10:11:30.611483 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb2f55a1-1af1-49b1-9dbc-d30063d6935e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bb2f55a1-1af1-49b1-9dbc-d30063d6935e" (UID: "bb2f55a1-1af1-49b1-9dbc-d30063d6935e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 10:11:30.707911 master-0 kubenswrapper[30420]: I0318 10:11:30.707845 30420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bb2f55a1-1af1-49b1-9dbc-d30063d6935e-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 10:11:31.144160 master-0 kubenswrapper[30420]: I0318 10:11:31.144037 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-0" event={"ID":"bb2f55a1-1af1-49b1-9dbc-d30063d6935e","Type":"ContainerDied","Data":"80a26a8c280c45ba9b25c1277b1956a8bb7fedc0a47b519ae908fedfe1146e06"} Mar 18 10:11:31.144160 master-0 kubenswrapper[30420]: I0318 10:11:31.144093 30420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80a26a8c280c45ba9b25c1277b1956a8bb7fedc0a47b519ae908fedfe1146e06" Mar 18 10:11:31.144160 master-0 kubenswrapper[30420]: I0318 10:11:31.144122 30420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-0" Mar 18 10:11:36.170426 master-0 kubenswrapper[30420]: I0318 10:11:36.170369 30420 scope.go:117] "RemoveContainer" containerID="0e1b90509e26fef960c00500d9ad97c317d8639e8d0264437904c7c3c438399a" Mar 18 10:11:36.201119 master-0 kubenswrapper[30420]: I0318 10:11:36.200997 30420 scope.go:117] "RemoveContainer" containerID="f94e501b0ad12236c03bc538f983952a18a8058deb0777210379742bce193fde" Mar 18 10:11:36.222809 master-0 kubenswrapper[30420]: I0318 10:11:36.222738 30420 scope.go:117] "RemoveContainer" containerID="eeb871e8e559b9fd82b985e8a38853c6cc1a0962899e9d61d0017f002e610d41" Mar 18 10:11:36.246238 master-0 kubenswrapper[30420]: I0318 10:11:36.246131 30420 scope.go:117] "RemoveContainer" containerID="8a062b1b85a12fd918c3c62a85847e5a60612517f0ee750aabe64bd125668daf" Mar 18 10:11:36.277306 master-0 kubenswrapper[30420]: I0318 10:11:36.277240 30420 scope.go:117] "RemoveContainer" containerID="fce78d10ab44ad6e3870abc2e19feeb6f5ae7acb96a08b13653663840e0cbb1b" Mar 18 10:11:36.303227 master-0 kubenswrapper[30420]: I0318 10:11:36.303144 30420 scope.go:117] "RemoveContainer" containerID="504c7c58af279fedab2f56000cc691abf8096faa6bf0c02f961583e20a138ed6" Mar 18 10:11:36.328343 master-0 kubenswrapper[30420]: I0318 10:11:36.328275 30420 scope.go:117] "RemoveContainer" containerID="e73e9ab6250891a74742cf894dfa6d6f12c07f81c7c6e29abf71445a93b042c6" Mar 18 10:11:36.349937 master-0 kubenswrapper[30420]: I0318 10:11:36.349896 30420 scope.go:117] "RemoveContainer" containerID="3e2c362efe2fe8c48b78a8150b0e9484398aa97bf0cb69d78e0777b3495062fc" Mar 18 10:11:36.369231 master-0 kubenswrapper[30420]: I0318 10:11:36.369186 30420 scope.go:117] "RemoveContainer" containerID="5a898e220fc5eed6a4a32559913535749eb16cc2a7cd17e978e4c62aa7e6452a" Mar 18 10:11:37.027272 master-0 kubenswrapper[30420]: I0318 10:11:37.027206 30420 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 18 10:11:37.028028 master-0 kubenswrapper[30420]: E0318 10:11:37.027976 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb2f55a1-1af1-49b1-9dbc-d30063d6935e" containerName="pruner" Mar 18 10:11:37.028171 master-0 kubenswrapper[30420]: I0318 10:11:37.028020 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb2f55a1-1af1-49b1-9dbc-d30063d6935e" containerName="pruner" Mar 18 10:11:37.035187 master-0 kubenswrapper[30420]: I0318 10:11:37.032941 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb2f55a1-1af1-49b1-9dbc-d30063d6935e" containerName="pruner" Mar 18 10:11:37.035187 master-0 kubenswrapper[30420]: I0318 10:11:37.033639 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 10:11:37.037010 master-0 kubenswrapper[30420]: I0318 10:11:37.036965 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 18 10:11:37.037797 master-0 kubenswrapper[30420]: I0318 10:11:37.037755 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-244m4" Mar 18 10:11:37.044885 master-0 kubenswrapper[30420]: I0318 10:11:37.038230 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 18 10:11:37.141006 master-0 kubenswrapper[30420]: I0318 10:11:37.140920 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f537225-8565-4515-bbee-3f92c99e0ac0-kube-api-access\") pod \"installer-5-master-0\" (UID: \"3f537225-8565-4515-bbee-3f92c99e0ac0\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 10:11:37.141006 master-0 kubenswrapper[30420]: I0318 10:11:37.140997 30420 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3f537225-8565-4515-bbee-3f92c99e0ac0-var-lock\") pod \"installer-5-master-0\" (UID: \"3f537225-8565-4515-bbee-3f92c99e0ac0\") " pod="openshift-kube-apiserver/installer-5-master-0"
Mar 18 10:11:37.141373 master-0 kubenswrapper[30420]: I0318 10:11:37.141100 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f537225-8565-4515-bbee-3f92c99e0ac0-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"3f537225-8565-4515-bbee-3f92c99e0ac0\") " pod="openshift-kube-apiserver/installer-5-master-0"
Mar 18 10:11:37.242111 master-0 kubenswrapper[30420]: I0318 10:11:37.242000 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f537225-8565-4515-bbee-3f92c99e0ac0-kube-api-access\") pod \"installer-5-master-0\" (UID: \"3f537225-8565-4515-bbee-3f92c99e0ac0\") " pod="openshift-kube-apiserver/installer-5-master-0"
Mar 18 10:11:37.242111 master-0 kubenswrapper[30420]: I0318 10:11:37.242073 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3f537225-8565-4515-bbee-3f92c99e0ac0-var-lock\") pod \"installer-5-master-0\" (UID: \"3f537225-8565-4515-bbee-3f92c99e0ac0\") " pod="openshift-kube-apiserver/installer-5-master-0"
Mar 18 10:11:37.242759 master-0 kubenswrapper[30420]: I0318 10:11:37.242174 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3f537225-8565-4515-bbee-3f92c99e0ac0-var-lock\") pod \"installer-5-master-0\" (UID: \"3f537225-8565-4515-bbee-3f92c99e0ac0\") " pod="openshift-kube-apiserver/installer-5-master-0"
Mar 18 10:11:37.242759 master-0 kubenswrapper[30420]: I0318 10:11:37.242206 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f537225-8565-4515-bbee-3f92c99e0ac0-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"3f537225-8565-4515-bbee-3f92c99e0ac0\") " pod="openshift-kube-apiserver/installer-5-master-0"
Mar 18 10:11:37.242759 master-0 kubenswrapper[30420]: I0318 10:11:37.242262 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f537225-8565-4515-bbee-3f92c99e0ac0-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"3f537225-8565-4515-bbee-3f92c99e0ac0\") " pod="openshift-kube-apiserver/installer-5-master-0"
Mar 18 10:11:37.265125 master-0 kubenswrapper[30420]: I0318 10:11:37.265040 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f537225-8565-4515-bbee-3f92c99e0ac0-kube-api-access\") pod \"installer-5-master-0\" (UID: \"3f537225-8565-4515-bbee-3f92c99e0ac0\") " pod="openshift-kube-apiserver/installer-5-master-0"
Mar 18 10:11:37.373406 master-0 kubenswrapper[30420]: I0318 10:11:37.373282 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0"
Mar 18 10:11:37.903297 master-0 kubenswrapper[30420]: I0318 10:11:37.903219 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"]
Mar 18 10:11:37.912959 master-0 kubenswrapper[30420]: W0318 10:11:37.912870 30420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3f537225_8565_4515_bbee_3f92c99e0ac0.slice/crio-df0ebf684703383f7c24a34031402f63abf2f131ff9964f67b9620901b79e9b1 WatchSource:0}: Error finding container df0ebf684703383f7c24a34031402f63abf2f131ff9964f67b9620901b79e9b1: Status 404 returned error can't find the container with id df0ebf684703383f7c24a34031402f63abf2f131ff9964f67b9620901b79e9b1
Mar 18 10:11:38.214267 master-0 kubenswrapper[30420]: I0318 10:11:38.214181 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"3f537225-8565-4515-bbee-3f92c99e0ac0","Type":"ContainerStarted","Data":"df0ebf684703383f7c24a34031402f63abf2f131ff9964f67b9620901b79e9b1"}
Mar 18 10:11:39.227525 master-0 kubenswrapper[30420]: I0318 10:11:39.227437 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"3f537225-8565-4515-bbee-3f92c99e0ac0","Type":"ContainerStarted","Data":"6f9f6939311773fd5db347d07edf164ed9775acc249b6458eda38a37ed551e13"}
Mar 18 10:11:39.256554 master-0 kubenswrapper[30420]: I0318 10:11:39.256452 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-5-master-0" podStartSLOduration=3.256423875 podStartE2EDuration="3.256423875s" podCreationTimestamp="2026-03-18 10:11:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:11:39.246490535 +0000 UTC m=+63.299236504" watchObservedRunningTime="2026-03-18 10:11:39.256423875 +0000 UTC m=+63.309169814"
Mar 18 10:11:48.607120 master-0 kubenswrapper[30420]: I0318 10:11:48.607031 30420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"]
Mar 18 10:11:48.608464 master-0 kubenswrapper[30420]: I0318 10:11:48.607415 30420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-5-master-0" podUID="3f537225-8565-4515-bbee-3f92c99e0ac0" containerName="installer" containerID="cri-o://6f9f6939311773fd5db347d07edf164ed9775acc249b6458eda38a37ed551e13" gracePeriod=30
Mar 18 10:11:52.405128 master-0 kubenswrapper[30420]: I0318 10:11:52.405036 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"]
Mar 18 10:11:52.406448 master-0 kubenswrapper[30420]: I0318 10:11:52.406354 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0"
Mar 18 10:11:52.420845 master-0 kubenswrapper[30420]: I0318 10:11:52.420729 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"]
Mar 18 10:11:52.466742 master-0 kubenswrapper[30420]: I0318 10:11:52.466669 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/840c140c-d526-45b2-8c25-9df4c4efd602-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"840c140c-d526-45b2-8c25-9df4c4efd602\") " pod="openshift-kube-apiserver/installer-6-master-0"
Mar 18 10:11:52.466742 master-0 kubenswrapper[30420]: I0318 10:11:52.466752 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/840c140c-d526-45b2-8c25-9df4c4efd602-var-lock\") pod \"installer-6-master-0\" (UID: \"840c140c-d526-45b2-8c25-9df4c4efd602\") " pod="openshift-kube-apiserver/installer-6-master-0"
Mar 18 10:11:52.467290 master-0 kubenswrapper[30420]: I0318 10:11:52.466986 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/840c140c-d526-45b2-8c25-9df4c4efd602-kube-api-access\") pod \"installer-6-master-0\" (UID: \"840c140c-d526-45b2-8c25-9df4c4efd602\") " pod="openshift-kube-apiserver/installer-6-master-0"
Mar 18 10:11:52.568185 master-0 kubenswrapper[30420]: I0318 10:11:52.568119 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/840c140c-d526-45b2-8c25-9df4c4efd602-kube-api-access\") pod \"installer-6-master-0\" (UID: \"840c140c-d526-45b2-8c25-9df4c4efd602\") " pod="openshift-kube-apiserver/installer-6-master-0"
Mar 18 10:11:52.568643 master-0 kubenswrapper[30420]: I0318 10:11:52.568611 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/840c140c-d526-45b2-8c25-9df4c4efd602-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"840c140c-d526-45b2-8c25-9df4c4efd602\") " pod="openshift-kube-apiserver/installer-6-master-0"
Mar 18 10:11:52.568916 master-0 kubenswrapper[30420]: I0318 10:11:52.568885 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/840c140c-d526-45b2-8c25-9df4c4efd602-var-lock\") pod \"installer-6-master-0\" (UID: \"840c140c-d526-45b2-8c25-9df4c4efd602\") " pod="openshift-kube-apiserver/installer-6-master-0"
Mar 18 10:11:52.569176 master-0 kubenswrapper[30420]: I0318 10:11:52.568887 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/840c140c-d526-45b2-8c25-9df4c4efd602-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"840c140c-d526-45b2-8c25-9df4c4efd602\") " pod="openshift-kube-apiserver/installer-6-master-0"
Mar 18 10:11:52.569324 master-0 kubenswrapper[30420]: I0318 10:11:52.568957 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/840c140c-d526-45b2-8c25-9df4c4efd602-var-lock\") pod \"installer-6-master-0\" (UID: \"840c140c-d526-45b2-8c25-9df4c4efd602\") " pod="openshift-kube-apiserver/installer-6-master-0"
Mar 18 10:11:52.600048 master-0 kubenswrapper[30420]: I0318 10:11:52.599980 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/840c140c-d526-45b2-8c25-9df4c4efd602-kube-api-access\") pod \"installer-6-master-0\" (UID: \"840c140c-d526-45b2-8c25-9df4c4efd602\") " pod="openshift-kube-apiserver/installer-6-master-0"
Mar 18 10:11:52.744137 master-0 kubenswrapper[30420]: I0318 10:11:52.743979 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0"
Mar 18 10:11:53.280222 master-0 kubenswrapper[30420]: I0318 10:11:53.280116 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"]
Mar 18 10:11:53.291757 master-0 kubenswrapper[30420]: W0318 10:11:53.291703 30420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod840c140c_d526_45b2_8c25_9df4c4efd602.slice/crio-3f7c45156a693c0895ff1304114c7f881c174f64f6da0077b924719fad9f9385 WatchSource:0}: Error finding container 3f7c45156a693c0895ff1304114c7f881c174f64f6da0077b924719fad9f9385: Status 404 returned error can't find the container with id 3f7c45156a693c0895ff1304114c7f881c174f64f6da0077b924719fad9f9385
Mar 18 10:11:53.356554 master-0 kubenswrapper[30420]: I0318 10:11:53.356481 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"840c140c-d526-45b2-8c25-9df4c4efd602","Type":"ContainerStarted","Data":"3f7c45156a693c0895ff1304114c7f881c174f64f6da0077b924719fad9f9385"}
Mar 18 10:11:54.370602 master-0 kubenswrapper[30420]: I0318 10:11:54.370513 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"840c140c-d526-45b2-8c25-9df4c4efd602","Type":"ContainerStarted","Data":"38067aff857f4b1ac037294440586bce5f3c16951e6528567513c3e8b2cfd90d"}
Mar 18 10:12:09.503647 master-0 kubenswrapper[30420]: I0318 10:12:09.503568 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-5-master-0_3f537225-8565-4515-bbee-3f92c99e0ac0/installer/0.log"
Mar 18 10:12:09.504255 master-0 kubenswrapper[30420]: I0318 10:12:09.503658 30420 generic.go:334] "Generic (PLEG): container finished" podID="3f537225-8565-4515-bbee-3f92c99e0ac0" containerID="6f9f6939311773fd5db347d07edf164ed9775acc249b6458eda38a37ed551e13" exitCode=1
Mar 18 10:12:09.504255 master-0 kubenswrapper[30420]: I0318 10:12:09.503702 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"3f537225-8565-4515-bbee-3f92c99e0ac0","Type":"ContainerDied","Data":"6f9f6939311773fd5db347d07edf164ed9775acc249b6458eda38a37ed551e13"}
Mar 18 10:12:10.507859 master-0 kubenswrapper[30420]: I0318 10:12:10.507680 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-5-master-0_3f537225-8565-4515-bbee-3f92c99e0ac0/installer/0.log"
Mar 18 10:12:10.507859 master-0 kubenswrapper[30420]: I0318 10:12:10.507756 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0"
Mar 18 10:12:10.512581 master-0 kubenswrapper[30420]: I0318 10:12:10.512110 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-5-master-0_3f537225-8565-4515-bbee-3f92c99e0ac0/installer/0.log"
Mar 18 10:12:10.512581 master-0 kubenswrapper[30420]: I0318 10:12:10.512153 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"3f537225-8565-4515-bbee-3f92c99e0ac0","Type":"ContainerDied","Data":"df0ebf684703383f7c24a34031402f63abf2f131ff9964f67b9620901b79e9b1"}
Mar 18 10:12:10.512581 master-0 kubenswrapper[30420]: I0318 10:12:10.512197 30420 scope.go:117] "RemoveContainer" containerID="6f9f6939311773fd5db347d07edf164ed9775acc249b6458eda38a37ed551e13"
Mar 18 10:12:10.512581 master-0 kubenswrapper[30420]: I0318 10:12:10.512292 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0"
Mar 18 10:12:10.536142 master-0 kubenswrapper[30420]: I0318 10:12:10.534278 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-6-master-0" podStartSLOduration=18.534249985 podStartE2EDuration="18.534249985s" podCreationTimestamp="2026-03-18 10:11:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:11:54.454331782 +0000 UTC m=+78.507077741" watchObservedRunningTime="2026-03-18 10:12:10.534249985 +0000 UTC m=+94.586995934"
Mar 18 10:12:10.536142 master-0 kubenswrapper[30420]: I0318 10:12:10.534969 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f537225-8565-4515-bbee-3f92c99e0ac0-kube-api-access\") pod \"3f537225-8565-4515-bbee-3f92c99e0ac0\" (UID: \"3f537225-8565-4515-bbee-3f92c99e0ac0\") "
Mar 18 10:12:10.536142 master-0 kubenswrapper[30420]: I0318 10:12:10.535039 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3f537225-8565-4515-bbee-3f92c99e0ac0-var-lock\") pod \"3f537225-8565-4515-bbee-3f92c99e0ac0\" (UID: \"3f537225-8565-4515-bbee-3f92c99e0ac0\") "
Mar 18 10:12:10.536142 master-0 kubenswrapper[30420]: I0318 10:12:10.535075 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f537225-8565-4515-bbee-3f92c99e0ac0-kubelet-dir\") pod \"3f537225-8565-4515-bbee-3f92c99e0ac0\" (UID: \"3f537225-8565-4515-bbee-3f92c99e0ac0\") "
Mar 18 10:12:10.536142 master-0 kubenswrapper[30420]: I0318 10:12:10.535365 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f537225-8565-4515-bbee-3f92c99e0ac0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3f537225-8565-4515-bbee-3f92c99e0ac0" (UID: "3f537225-8565-4515-bbee-3f92c99e0ac0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 10:12:10.536142 master-0 kubenswrapper[30420]: I0318 10:12:10.535900 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f537225-8565-4515-bbee-3f92c99e0ac0-var-lock" (OuterVolumeSpecName: "var-lock") pod "3f537225-8565-4515-bbee-3f92c99e0ac0" (UID: "3f537225-8565-4515-bbee-3f92c99e0ac0"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 10:12:10.538707 master-0 kubenswrapper[30420]: I0318 10:12:10.538664 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f537225-8565-4515-bbee-3f92c99e0ac0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3f537225-8565-4515-bbee-3f92c99e0ac0" (UID: "3f537225-8565-4515-bbee-3f92c99e0ac0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 10:12:10.635900 master-0 kubenswrapper[30420]: I0318 10:12:10.635797 30420 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3f537225-8565-4515-bbee-3f92c99e0ac0-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 10:12:10.635900 master-0 kubenswrapper[30420]: I0318 10:12:10.635863 30420 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f537225-8565-4515-bbee-3f92c99e0ac0-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 10:12:10.635900 master-0 kubenswrapper[30420]: I0318 10:12:10.635878 30420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f537225-8565-4515-bbee-3f92c99e0ac0-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 10:12:10.880869 master-0 kubenswrapper[30420]: I0318 10:12:10.879984 30420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"]
Mar 18 10:12:10.885508 master-0 kubenswrapper[30420]: I0318 10:12:10.885442 30420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"]
Mar 18 10:12:12.177281 master-0 kubenswrapper[30420]: I0318 10:12:12.177217 30420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f537225-8565-4515-bbee-3f92c99e0ac0" path="/var/lib/kubelet/pods/3f537225-8565-4515-bbee-3f92c99e0ac0/volumes"
Mar 18 10:12:51.716404 master-0 kubenswrapper[30420]: I0318 10:12:51.716312 30420 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 18 10:12:51.717413 master-0 kubenswrapper[30420]: E0318 10:12:51.716854 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f537225-8565-4515-bbee-3f92c99e0ac0" containerName="installer"
Mar 18 10:12:51.717413 master-0 kubenswrapper[30420]: I0318 10:12:51.716887 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f537225-8565-4515-bbee-3f92c99e0ac0" containerName="installer"
Mar 18 10:12:51.717413 master-0 kubenswrapper[30420]: I0318 10:12:51.717153 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f537225-8565-4515-bbee-3f92c99e0ac0" containerName="installer"
Mar 18 10:12:51.718082 master-0 kubenswrapper[30420]: I0318 10:12:51.718042 30420 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 18 10:12:51.718281 master-0 kubenswrapper[30420]: I0318 10:12:51.718241 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 10:12:51.719072 master-0 kubenswrapper[30420]: I0318 10:12:51.718595 30420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver" containerID="cri-o://ded34cf4fdd9b643ef1d1a0098bfad469ce8ef5239dc4118284afcfde2e248f3" gracePeriod=15
Mar 18 10:12:51.719072 master-0 kubenswrapper[30420]: I0318 10:12:51.718644 30420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://86830c5df134908b2f32b76c37923b88bd5a444c6f59be882124eea01b53a83c" gracePeriod=15
Mar 18 10:12:51.719072 master-0 kubenswrapper[30420]: I0318 10:12:51.718732 30420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-check-endpoints" containerID="cri-o://2e73e17f2dee64b431d0ee9faadf4eec1b11e6de7060e811ff51b2fd90ed860d" gracePeriod=15
Mar 18 10:12:51.719072 master-0 kubenswrapper[30420]: I0318 10:12:51.718805 30420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://668a49f34398024ea5898641cb8be9421d28998b9871029b38ab0fb8fddfb640" gracePeriod=15
Mar 18 10:12:51.719072 master-0 kubenswrapper[30420]: I0318 10:12:51.718805 30420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-syncer" containerID="cri-o://16fdc2ec4e971dcc798c97b8e7872ebfb3c5f1297f42468d5129028ea0f9e0fb" gracePeriod=15
Mar 18 10:12:51.721386 master-0 kubenswrapper[30420]: I0318 10:12:51.720814 30420 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 18 10:12:51.721386 master-0 kubenswrapper[30420]: E0318 10:12:51.721252 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-check-endpoints"
Mar 18 10:12:51.721386 master-0 kubenswrapper[30420]: I0318 10:12:51.721284 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-check-endpoints"
Mar 18 10:12:51.721386 master-0 kubenswrapper[30420]: E0318 10:12:51.721316 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-syncer"
Mar 18 10:12:51.721386 master-0 kubenswrapper[30420]: I0318 10:12:51.721330 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-syncer"
Mar 18 10:12:51.721386 master-0 kubenswrapper[30420]: E0318 10:12:51.721377 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-regeneration-controller"
Mar 18 10:12:51.721386 master-0 kubenswrapper[30420]: I0318 10:12:51.721392 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-regeneration-controller"
Mar 18 10:12:51.722196 master-0 kubenswrapper[30420]: E0318 10:12:51.721412 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="setup"
Mar 18 10:12:51.722196 master-0 kubenswrapper[30420]: I0318 10:12:51.721425 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="setup"
Mar 18 10:12:51.722196 master-0 kubenswrapper[30420]: E0318 10:12:51.721456 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-check-endpoints"
Mar 18 10:12:51.722196 master-0 kubenswrapper[30420]: I0318 10:12:51.721473 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-check-endpoints"
Mar 18 10:12:51.722196 master-0 kubenswrapper[30420]: E0318 10:12:51.721496 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver"
Mar 18 10:12:51.722196 master-0 kubenswrapper[30420]: I0318 10:12:51.721515 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver"
Mar 18 10:12:51.722196 master-0 kubenswrapper[30420]: E0318 10:12:51.721539 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-insecure-readyz"
Mar 18 10:12:51.722196 master-0 kubenswrapper[30420]: I0318 10:12:51.721556 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-insecure-readyz"
Mar 18 10:12:51.722196 master-0 kubenswrapper[30420]: I0318 10:12:51.721785 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-regeneration-controller"
Mar 18 10:12:51.722196 master-0 kubenswrapper[30420]: I0318 10:12:51.721812 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-insecure-readyz"
Mar 18 10:12:51.722196 master-0 kubenswrapper[30420]: I0318 10:12:51.721873 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-check-endpoints"
Mar 18 10:12:51.722196 master-0 kubenswrapper[30420]: I0318 10:12:51.721894 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver"
Mar 18 10:12:51.722196 master-0 kubenswrapper[30420]: I0318 10:12:51.721915 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-syncer"
Mar 18 10:12:51.722196 master-0 kubenswrapper[30420]: I0318 10:12:51.721943 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-check-endpoints"
Mar 18 10:12:51.750422 master-0 kubenswrapper[30420]: I0318 10:12:51.750073 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 10:12:51.750422 master-0 kubenswrapper[30420]: I0318 10:12:51.750210 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 10:12:51.750651 master-0 kubenswrapper[30420]: I0318 10:12:51.750467 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 10:12:51.750651 master-0 kubenswrapper[30420]: I0318 10:12:51.750559 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 10:12:51.750818 master-0 kubenswrapper[30420]: I0318 10:12:51.750649 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 10:12:51.750818 master-0 kubenswrapper[30420]: I0318 10:12:51.750785 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 10:12:51.750935 master-0 kubenswrapper[30420]: I0318 10:12:51.750877 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 10:12:51.750979 master-0 kubenswrapper[30420]: I0318 10:12:51.750940 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 10:12:51.832620 master-0 kubenswrapper[30420]: E0318 10:12:51.831226 30420 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 10:12:51.851593 master-0 kubenswrapper[30420]: I0318 10:12:51.851543 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 10:12:51.851774 master-0 kubenswrapper[30420]: I0318 10:12:51.851619 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 10:12:51.851774 master-0 kubenswrapper[30420]: I0318 10:12:51.851666 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 10:12:51.851774 master-0 kubenswrapper[30420]: I0318 10:12:51.851711 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 10:12:51.851774 master-0 kubenswrapper[30420]: I0318 10:12:51.851763 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 10:12:51.852385 master-0 kubenswrapper[30420]: I0318 10:12:51.852117 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 10:12:51.852385 master-0 kubenswrapper[30420]: I0318 10:12:51.852117 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 10:12:51.852385 master-0 kubenswrapper[30420]: I0318 10:12:51.852168 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 10:12:51.852385 master-0 kubenswrapper[30420]: I0318 10:12:51.852226 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 10:12:51.852385 master-0 kubenswrapper[30420]: I0318 10:12:51.852249 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 10:12:51.852385 master-0 kubenswrapper[30420]: I0318 10:12:51.852274 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 10:12:51.852385 master-0 kubenswrapper[30420]: I0318 10:12:51.852310 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 10:12:51.852385 master-0 kubenswrapper[30420]: I0318 10:12:51.852310 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 10:12:51.852385 master-0 kubenswrapper[30420]: I0318 10:12:51.852335 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 10:12:51.853561 master-0 kubenswrapper[30420]: I0318 10:12:51.852442 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 10:12:51.853561 master-0 kubenswrapper[30420]: I0318 10:12:51.852528 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 10:12:51.901874 master-0 kubenswrapper[30420]: I0318 10:12:51.897018 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_7d5ce05b3d592e63f1f92202d52b9635/kube-apiserver-check-endpoints/0.log"
Mar 18 10:12:51.901874 master-0 kubenswrapper[30420]: I0318 10:12:51.898607 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_7d5ce05b3d592e63f1f92202d52b9635/kube-apiserver-cert-syncer/0.log"
Mar 18 10:12:51.901874 master-0 kubenswrapper[30420]: I0318 10:12:51.899222 30420 generic.go:334] "Generic (PLEG): container finished" podID="7d5ce05b3d592e63f1f92202d52b9635" containerID="2e73e17f2dee64b431d0ee9faadf4eec1b11e6de7060e811ff51b2fd90ed860d" exitCode=0
Mar 18 10:12:51.901874 master-0 kubenswrapper[30420]: I0318 10:12:51.899245 30420 generic.go:334] "Generic (PLEG): container finished" podID="7d5ce05b3d592e63f1f92202d52b9635" containerID="86830c5df134908b2f32b76c37923b88bd5a444c6f59be882124eea01b53a83c" exitCode=0
Mar 18 10:12:51.901874 master-0 kubenswrapper[30420]: I0318 10:12:51.899254 30420 generic.go:334] "Generic (PLEG): container finished" podID="7d5ce05b3d592e63f1f92202d52b9635" containerID="668a49f34398024ea5898641cb8be9421d28998b9871029b38ab0fb8fddfb640" exitCode=0
Mar 18 10:12:51.901874 master-0 kubenswrapper[30420]: I0318 10:12:51.899263 30420 generic.go:334] "Generic (PLEG): container finished" podID="7d5ce05b3d592e63f1f92202d52b9635" containerID="16fdc2ec4e971dcc798c97b8e7872ebfb3c5f1297f42468d5129028ea0f9e0fb" exitCode=2
Mar 18 10:12:51.901874 master-0 kubenswrapper[30420]: I0318 10:12:51.899304 30420 scope.go:117] "RemoveContainer" containerID="ceb0752eea3da310ec4f97706cc49b9e5802cdc6a08264ab2c0725b45c7967d0"
Mar 18 10:12:51.903462 master-0 kubenswrapper[30420]: I0318 10:12:51.903429 30420 patch_prober.go:28] interesting pod/kube-apiserver-master-0 container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.32.10:6443/readyz\": dial tcp 192.168.32.10:6443: connect: connection refused" start-of-body=
Mar 18 10:12:51.903462 master-0 kubenswrapper[30420]: I0318 10:12:51.903482 30420 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.32.10:6443/readyz\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 10:12:51.907960 master-0 kubenswrapper[30420]: E0318 10:12:51.905080 30420 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect:
connection refused" event=< Mar 18 10:12:51.907960 master-0 kubenswrapper[30420]: &Event{ObjectMeta:{kube-apiserver-master-0.189de7deb1ac6a60 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-master-0,UID:7d5ce05b3d592e63f1f92202d52b9635,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.32.10:6443/readyz": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 10:12:51.907960 master-0 kubenswrapper[30420]: body: Mar 18 10:12:51.907960 master-0 kubenswrapper[30420]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 10:12:51.90346608 +0000 UTC m=+135.956212019,LastTimestamp:2026-03-18 10:12:51.90346608 +0000 UTC m=+135.956212019,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,} Mar 18 10:12:51.907960 master-0 kubenswrapper[30420]: > Mar 18 10:12:52.132749 master-0 kubenswrapper[30420]: I0318 10:12:52.132692 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:12:52.165597 master-0 kubenswrapper[30420]: W0318 10:12:52.165533 30420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebbfbf2b56df0323ba118d68bfdad8b9.slice/crio-68a2636f554e2ba2d648324fd8a0918146d71a9e9db6d1953b6e2a6b0bcc34c6 WatchSource:0}: Error finding container 68a2636f554e2ba2d648324fd8a0918146d71a9e9db6d1953b6e2a6b0bcc34c6: Status 404 returned error can't find the container with id 68a2636f554e2ba2d648324fd8a0918146d71a9e9db6d1953b6e2a6b0bcc34c6 Mar 18 10:12:52.915327 master-0 kubenswrapper[30420]: I0318 10:12:52.915248 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_7d5ce05b3d592e63f1f92202d52b9635/kube-apiserver-cert-syncer/0.log" Mar 18 10:12:52.920180 master-0 kubenswrapper[30420]: I0318 10:12:52.920064 30420 generic.go:334] "Generic (PLEG): container finished" podID="840c140c-d526-45b2-8c25-9df4c4efd602" containerID="38067aff857f4b1ac037294440586bce5f3c16951e6528567513c3e8b2cfd90d" exitCode=0 Mar 18 10:12:52.920180 master-0 kubenswrapper[30420]: I0318 10:12:52.920135 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"840c140c-d526-45b2-8c25-9df4c4efd602","Type":"ContainerDied","Data":"38067aff857f4b1ac037294440586bce5f3c16951e6528567513c3e8b2cfd90d"} Mar 18 10:12:52.922219 master-0 kubenswrapper[30420]: I0318 10:12:52.922041 30420 status_manager.go:851] "Failed to get status for pod" podUID="840c140c-d526-45b2-8c25-9df4c4efd602" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 10:12:52.923208 master-0 kubenswrapper[30420]: I0318 10:12:52.923119 
30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"ebbfbf2b56df0323ba118d68bfdad8b9","Type":"ContainerStarted","Data":"5a61a2c573738e71fed75545f604a081729d6be677df07f48a9700a49bbc8e27"} Mar 18 10:12:52.923208 master-0 kubenswrapper[30420]: I0318 10:12:52.923186 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"ebbfbf2b56df0323ba118d68bfdad8b9","Type":"ContainerStarted","Data":"68a2636f554e2ba2d648324fd8a0918146d71a9e9db6d1953b6e2a6b0bcc34c6"} Mar 18 10:12:52.924281 master-0 kubenswrapper[30420]: I0318 10:12:52.924215 30420 status_manager.go:851] "Failed to get status for pod" podUID="840c140c-d526-45b2-8c25-9df4c4efd602" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 10:12:52.924281 master-0 kubenswrapper[30420]: E0318 10:12:52.924219 30420 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 10:12:54.285302 master-0 kubenswrapper[30420]: I0318 10:12:54.285229 30420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Mar 18 10:12:54.286214 master-0 kubenswrapper[30420]: I0318 10:12:54.286163 30420 status_manager.go:851] "Failed to get status for pod" podUID="840c140c-d526-45b2-8c25-9df4c4efd602" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 10:12:54.388523 master-0 kubenswrapper[30420]: I0318 10:12:54.388407 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/840c140c-d526-45b2-8c25-9df4c4efd602-kubelet-dir\") pod \"840c140c-d526-45b2-8c25-9df4c4efd602\" (UID: \"840c140c-d526-45b2-8c25-9df4c4efd602\") " Mar 18 10:12:54.388523 master-0 kubenswrapper[30420]: I0318 10:12:54.388535 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/840c140c-d526-45b2-8c25-9df4c4efd602-kube-api-access\") pod \"840c140c-d526-45b2-8c25-9df4c4efd602\" (UID: \"840c140c-d526-45b2-8c25-9df4c4efd602\") " Mar 18 10:12:54.389008 master-0 kubenswrapper[30420]: I0318 10:12:54.388537 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/840c140c-d526-45b2-8c25-9df4c4efd602-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "840c140c-d526-45b2-8c25-9df4c4efd602" (UID: "840c140c-d526-45b2-8c25-9df4c4efd602"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:12:54.389008 master-0 kubenswrapper[30420]: I0318 10:12:54.388617 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/840c140c-d526-45b2-8c25-9df4c4efd602-var-lock\") pod \"840c140c-d526-45b2-8c25-9df4c4efd602\" (UID: \"840c140c-d526-45b2-8c25-9df4c4efd602\") " Mar 18 10:12:54.389008 master-0 kubenswrapper[30420]: I0318 10:12:54.388974 30420 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/840c140c-d526-45b2-8c25-9df4c4efd602-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 10:12:54.389274 master-0 kubenswrapper[30420]: I0318 10:12:54.389045 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/840c140c-d526-45b2-8c25-9df4c4efd602-var-lock" (OuterVolumeSpecName: "var-lock") pod "840c140c-d526-45b2-8c25-9df4c4efd602" (UID: "840c140c-d526-45b2-8c25-9df4c4efd602"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:12:54.391911 master-0 kubenswrapper[30420]: I0318 10:12:54.391819 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/840c140c-d526-45b2-8c25-9df4c4efd602-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "840c140c-d526-45b2-8c25-9df4c4efd602" (UID: "840c140c-d526-45b2-8c25-9df4c4efd602"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 10:12:54.490578 master-0 kubenswrapper[30420]: I0318 10:12:54.490503 30420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/840c140c-d526-45b2-8c25-9df4c4efd602-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 10:12:54.490578 master-0 kubenswrapper[30420]: I0318 10:12:54.490567 30420 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/840c140c-d526-45b2-8c25-9df4c4efd602-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 10:12:54.610119 master-0 kubenswrapper[30420]: I0318 10:12:54.610041 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_7d5ce05b3d592e63f1f92202d52b9635/kube-apiserver-cert-syncer/0.log" Mar 18 10:12:54.611683 master-0 kubenswrapper[30420]: I0318 10:12:54.611628 30420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:12:54.613191 master-0 kubenswrapper[30420]: I0318 10:12:54.613121 30420 status_manager.go:851] "Failed to get status for pod" podUID="7d5ce05b3d592e63f1f92202d52b9635" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 10:12:54.614159 master-0 kubenswrapper[30420]: I0318 10:12:54.614080 30420 status_manager.go:851] "Failed to get status for pod" podUID="840c140c-d526-45b2-8c25-9df4c4efd602" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 10:12:54.797005 master-0 kubenswrapper[30420]: I0318 10:12:54.796945 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir\") pod \"7d5ce05b3d592e63f1f92202d52b9635\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " Mar 18 10:12:54.797476 master-0 kubenswrapper[30420]: I0318 10:12:54.797439 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir\") pod \"7d5ce05b3d592e63f1f92202d52b9635\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " Mar 18 10:12:54.797748 master-0 kubenswrapper[30420]: I0318 10:12:54.797163 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "7d5ce05b3d592e63f1f92202d52b9635" (UID: "7d5ce05b3d592e63f1f92202d52b9635"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:12:54.797748 master-0 kubenswrapper[30420]: I0318 10:12:54.797479 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "7d5ce05b3d592e63f1f92202d52b9635" (UID: "7d5ce05b3d592e63f1f92202d52b9635"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:12:54.798023 master-0 kubenswrapper[30420]: I0318 10:12:54.797711 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir\") pod \"7d5ce05b3d592e63f1f92202d52b9635\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " Mar 18 10:12:54.798164 master-0 kubenswrapper[30420]: I0318 10:12:54.798126 30420 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 10:12:54.798164 master-0 kubenswrapper[30420]: I0318 10:12:54.798152 30420 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 10:12:54.798520 master-0 kubenswrapper[30420]: I0318 10:12:54.798475 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "7d5ce05b3d592e63f1f92202d52b9635" (UID: "7d5ce05b3d592e63f1f92202d52b9635"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:12:54.898972 master-0 kubenswrapper[30420]: I0318 10:12:54.898821 30420 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 10:12:54.956341 master-0 kubenswrapper[30420]: I0318 10:12:54.956299 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_7d5ce05b3d592e63f1f92202d52b9635/kube-apiserver-cert-syncer/0.log" Mar 18 10:12:54.957476 master-0 kubenswrapper[30420]: I0318 10:12:54.957443 30420 generic.go:334] "Generic (PLEG): container finished" podID="7d5ce05b3d592e63f1f92202d52b9635" containerID="ded34cf4fdd9b643ef1d1a0098bfad469ce8ef5239dc4118284afcfde2e248f3" exitCode=0 Mar 18 10:12:54.957598 master-0 kubenswrapper[30420]: I0318 10:12:54.957530 30420 scope.go:117] "RemoveContainer" containerID="2e73e17f2dee64b431d0ee9faadf4eec1b11e6de7060e811ff51b2fd90ed860d" Mar 18 10:12:54.957654 master-0 kubenswrapper[30420]: I0318 10:12:54.957534 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:12:54.961438 master-0 kubenswrapper[30420]: I0318 10:12:54.961338 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"840c140c-d526-45b2-8c25-9df4c4efd602","Type":"ContainerDied","Data":"3f7c45156a693c0895ff1304114c7f881c174f64f6da0077b924719fad9f9385"} Mar 18 10:12:54.961438 master-0 kubenswrapper[30420]: I0318 10:12:54.961380 30420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f7c45156a693c0895ff1304114c7f881c174f64f6da0077b924719fad9f9385" Mar 18 10:12:54.961588 master-0 kubenswrapper[30420]: I0318 10:12:54.961442 30420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Mar 18 10:12:54.975717 master-0 kubenswrapper[30420]: I0318 10:12:54.975604 30420 scope.go:117] "RemoveContainer" containerID="86830c5df134908b2f32b76c37923b88bd5a444c6f59be882124eea01b53a83c" Mar 18 10:12:54.980541 master-0 kubenswrapper[30420]: I0318 10:12:54.980463 30420 status_manager.go:851] "Failed to get status for pod" podUID="840c140c-d526-45b2-8c25-9df4c4efd602" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 10:12:54.980971 master-0 kubenswrapper[30420]: I0318 10:12:54.980943 30420 status_manager.go:851] "Failed to get status for pod" podUID="7d5ce05b3d592e63f1f92202d52b9635" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 10:12:54.995840 master-0 kubenswrapper[30420]: I0318 10:12:54.995730 30420 scope.go:117] "RemoveContainer" containerID="668a49f34398024ea5898641cb8be9421d28998b9871029b38ab0fb8fddfb640" Mar 18 10:12:55.000262 master-0 kubenswrapper[30420]: I0318 10:12:55.000093 30420 status_manager.go:851] "Failed to get status for pod" podUID="840c140c-d526-45b2-8c25-9df4c4efd602" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 10:12:55.001199 master-0 kubenswrapper[30420]: I0318 10:12:55.001149 30420 status_manager.go:851] "Failed to get status for pod" podUID="7d5ce05b3d592e63f1f92202d52b9635" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 10:12:55.015954 master-0 kubenswrapper[30420]: I0318 10:12:55.015763 30420 scope.go:117] "RemoveContainer" containerID="16fdc2ec4e971dcc798c97b8e7872ebfb3c5f1297f42468d5129028ea0f9e0fb" Mar 18 10:12:55.037096 master-0 kubenswrapper[30420]: I0318 10:12:55.036893 30420 scope.go:117] "RemoveContainer" containerID="ded34cf4fdd9b643ef1d1a0098bfad469ce8ef5239dc4118284afcfde2e248f3" Mar 18 10:12:55.057490 master-0 kubenswrapper[30420]: I0318 10:12:55.057428 30420 scope.go:117] "RemoveContainer" containerID="51b7edc7a7043b6e7a110520107ff6d77f1544d8cbe4bac90f24bd4ae0e3e289" Mar 18 10:12:55.077268 master-0 kubenswrapper[30420]: I0318 10:12:55.077214 30420 scope.go:117] "RemoveContainer" containerID="2e73e17f2dee64b431d0ee9faadf4eec1b11e6de7060e811ff51b2fd90ed860d" Mar 18 10:12:55.077784 master-0 kubenswrapper[30420]: E0318 10:12:55.077744 30420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e73e17f2dee64b431d0ee9faadf4eec1b11e6de7060e811ff51b2fd90ed860d\": container with ID starting with 2e73e17f2dee64b431d0ee9faadf4eec1b11e6de7060e811ff51b2fd90ed860d not found: ID does not exist" containerID="2e73e17f2dee64b431d0ee9faadf4eec1b11e6de7060e811ff51b2fd90ed860d" Mar 18 10:12:55.077871 master-0 kubenswrapper[30420]: I0318 10:12:55.077775 30420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e73e17f2dee64b431d0ee9faadf4eec1b11e6de7060e811ff51b2fd90ed860d"} err="failed to get container status \"2e73e17f2dee64b431d0ee9faadf4eec1b11e6de7060e811ff51b2fd90ed860d\": rpc error: code = NotFound desc = could not find container \"2e73e17f2dee64b431d0ee9faadf4eec1b11e6de7060e811ff51b2fd90ed860d\": container with ID starting with 
2e73e17f2dee64b431d0ee9faadf4eec1b11e6de7060e811ff51b2fd90ed860d not found: ID does not exist" Mar 18 10:12:55.077871 master-0 kubenswrapper[30420]: I0318 10:12:55.077855 30420 scope.go:117] "RemoveContainer" containerID="86830c5df134908b2f32b76c37923b88bd5a444c6f59be882124eea01b53a83c" Mar 18 10:12:55.078170 master-0 kubenswrapper[30420]: E0318 10:12:55.078113 30420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86830c5df134908b2f32b76c37923b88bd5a444c6f59be882124eea01b53a83c\": container with ID starting with 86830c5df134908b2f32b76c37923b88bd5a444c6f59be882124eea01b53a83c not found: ID does not exist" containerID="86830c5df134908b2f32b76c37923b88bd5a444c6f59be882124eea01b53a83c" Mar 18 10:12:55.078170 master-0 kubenswrapper[30420]: I0318 10:12:55.078155 30420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86830c5df134908b2f32b76c37923b88bd5a444c6f59be882124eea01b53a83c"} err="failed to get container status \"86830c5df134908b2f32b76c37923b88bd5a444c6f59be882124eea01b53a83c\": rpc error: code = NotFound desc = could not find container \"86830c5df134908b2f32b76c37923b88bd5a444c6f59be882124eea01b53a83c\": container with ID starting with 86830c5df134908b2f32b76c37923b88bd5a444c6f59be882124eea01b53a83c not found: ID does not exist" Mar 18 10:12:55.078170 master-0 kubenswrapper[30420]: I0318 10:12:55.078169 30420 scope.go:117] "RemoveContainer" containerID="668a49f34398024ea5898641cb8be9421d28998b9871029b38ab0fb8fddfb640" Mar 18 10:12:55.078442 master-0 kubenswrapper[30420]: E0318 10:12:55.078411 30420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"668a49f34398024ea5898641cb8be9421d28998b9871029b38ab0fb8fddfb640\": container with ID starting with 668a49f34398024ea5898641cb8be9421d28998b9871029b38ab0fb8fddfb640 not found: ID does not exist" 
containerID="668a49f34398024ea5898641cb8be9421d28998b9871029b38ab0fb8fddfb640" Mar 18 10:12:55.078501 master-0 kubenswrapper[30420]: I0318 10:12:55.078455 30420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"668a49f34398024ea5898641cb8be9421d28998b9871029b38ab0fb8fddfb640"} err="failed to get container status \"668a49f34398024ea5898641cb8be9421d28998b9871029b38ab0fb8fddfb640\": rpc error: code = NotFound desc = could not find container \"668a49f34398024ea5898641cb8be9421d28998b9871029b38ab0fb8fddfb640\": container with ID starting with 668a49f34398024ea5898641cb8be9421d28998b9871029b38ab0fb8fddfb640 not found: ID does not exist" Mar 18 10:12:55.078501 master-0 kubenswrapper[30420]: I0318 10:12:55.078469 30420 scope.go:117] "RemoveContainer" containerID="16fdc2ec4e971dcc798c97b8e7872ebfb3c5f1297f42468d5129028ea0f9e0fb" Mar 18 10:12:55.078751 master-0 kubenswrapper[30420]: E0318 10:12:55.078724 30420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16fdc2ec4e971dcc798c97b8e7872ebfb3c5f1297f42468d5129028ea0f9e0fb\": container with ID starting with 16fdc2ec4e971dcc798c97b8e7872ebfb3c5f1297f42468d5129028ea0f9e0fb not found: ID does not exist" containerID="16fdc2ec4e971dcc798c97b8e7872ebfb3c5f1297f42468d5129028ea0f9e0fb" Mar 18 10:12:55.078808 master-0 kubenswrapper[30420]: I0318 10:12:55.078755 30420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16fdc2ec4e971dcc798c97b8e7872ebfb3c5f1297f42468d5129028ea0f9e0fb"} err="failed to get container status \"16fdc2ec4e971dcc798c97b8e7872ebfb3c5f1297f42468d5129028ea0f9e0fb\": rpc error: code = NotFound desc = could not find container \"16fdc2ec4e971dcc798c97b8e7872ebfb3c5f1297f42468d5129028ea0f9e0fb\": container with ID starting with 16fdc2ec4e971dcc798c97b8e7872ebfb3c5f1297f42468d5129028ea0f9e0fb not found: ID does not exist" Mar 18 10:12:55.078808 master-0 
kubenswrapper[30420]: I0318 10:12:55.078770 30420 scope.go:117] "RemoveContainer" containerID="ded34cf4fdd9b643ef1d1a0098bfad469ce8ef5239dc4118284afcfde2e248f3" Mar 18 10:12:55.079811 master-0 kubenswrapper[30420]: E0318 10:12:55.079746 30420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ded34cf4fdd9b643ef1d1a0098bfad469ce8ef5239dc4118284afcfde2e248f3\": container with ID starting with ded34cf4fdd9b643ef1d1a0098bfad469ce8ef5239dc4118284afcfde2e248f3 not found: ID does not exist" containerID="ded34cf4fdd9b643ef1d1a0098bfad469ce8ef5239dc4118284afcfde2e248f3" Mar 18 10:12:55.079927 master-0 kubenswrapper[30420]: I0318 10:12:55.079840 30420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ded34cf4fdd9b643ef1d1a0098bfad469ce8ef5239dc4118284afcfde2e248f3"} err="failed to get container status \"ded34cf4fdd9b643ef1d1a0098bfad469ce8ef5239dc4118284afcfde2e248f3\": rpc error: code = NotFound desc = could not find container \"ded34cf4fdd9b643ef1d1a0098bfad469ce8ef5239dc4118284afcfde2e248f3\": container with ID starting with ded34cf4fdd9b643ef1d1a0098bfad469ce8ef5239dc4118284afcfde2e248f3 not found: ID does not exist" Mar 18 10:12:55.079927 master-0 kubenswrapper[30420]: I0318 10:12:55.079862 30420 scope.go:117] "RemoveContainer" containerID="51b7edc7a7043b6e7a110520107ff6d77f1544d8cbe4bac90f24bd4ae0e3e289" Mar 18 10:12:55.080232 master-0 kubenswrapper[30420]: E0318 10:12:55.080183 30420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51b7edc7a7043b6e7a110520107ff6d77f1544d8cbe4bac90f24bd4ae0e3e289\": container with ID starting with 51b7edc7a7043b6e7a110520107ff6d77f1544d8cbe4bac90f24bd4ae0e3e289 not found: ID does not exist" containerID="51b7edc7a7043b6e7a110520107ff6d77f1544d8cbe4bac90f24bd4ae0e3e289" Mar 18 10:12:55.080232 master-0 kubenswrapper[30420]: I0318 10:12:55.080203 
30420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51b7edc7a7043b6e7a110520107ff6d77f1544d8cbe4bac90f24bd4ae0e3e289"} err="failed to get container status \"51b7edc7a7043b6e7a110520107ff6d77f1544d8cbe4bac90f24bd4ae0e3e289\": rpc error: code = NotFound desc = could not find container \"51b7edc7a7043b6e7a110520107ff6d77f1544d8cbe4bac90f24bd4ae0e3e289\": container with ID starting with 51b7edc7a7043b6e7a110520107ff6d77f1544d8cbe4bac90f24bd4ae0e3e289 not found: ID does not exist" Mar 18 10:12:55.341494 master-0 kubenswrapper[30420]: E0318 10:12:55.341435 30420 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 10:12:55.342279 master-0 kubenswrapper[30420]: E0318 10:12:55.342202 30420 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 10:12:55.343296 master-0 kubenswrapper[30420]: E0318 10:12:55.343251 30420 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 10:12:55.343881 master-0 kubenswrapper[30420]: E0318 10:12:55.343818 30420 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 10:12:55.344445 master-0 kubenswrapper[30420]: E0318 10:12:55.344392 30420 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 10:12:55.344488 master-0 kubenswrapper[30420]: I0318 10:12:55.344455 30420 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 18 10:12:55.345035 master-0 kubenswrapper[30420]: E0318 10:12:55.344993 30420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Mar 18 10:12:55.547025 master-0 kubenswrapper[30420]: E0318 10:12:55.546961 30420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Mar 18 10:12:55.948855 master-0 kubenswrapper[30420]: E0318 10:12:55.948769 30420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Mar 18 10:12:56.176685 master-0 kubenswrapper[30420]: I0318 10:12:56.176588 30420 status_manager.go:851] "Failed to get status for pod" podUID="7d5ce05b3d592e63f1f92202d52b9635" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 10:12:56.177990 master-0 kubenswrapper[30420]: I0318 10:12:56.177922 30420 status_manager.go:851] "Failed to get 
status for pod" podUID="840c140c-d526-45b2-8c25-9df4c4efd602" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 10:12:56.184237 master-0 kubenswrapper[30420]: I0318 10:12:56.184157 30420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d5ce05b3d592e63f1f92202d52b9635" path="/var/lib/kubelet/pods/7d5ce05b3d592e63f1f92202d52b9635/volumes" Mar 18 10:12:56.750028 master-0 kubenswrapper[30420]: E0318 10:12:56.749944 30420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Mar 18 10:12:58.261197 master-0 kubenswrapper[30420]: E0318 10:12:58.261045 30420 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event=< Mar 18 10:12:58.261197 master-0 kubenswrapper[30420]: &Event{ObjectMeta:{kube-apiserver-master-0.189de7deb1ac6a60 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-master-0,UID:7d5ce05b3d592e63f1f92202d52b9635,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.32.10:6443/readyz": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 10:12:58.261197 master-0 kubenswrapper[30420]: body: Mar 18 10:12:58.261197 master-0 kubenswrapper[30420]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 10:12:51.90346608 +0000 UTC 
m=+135.956212019,LastTimestamp:2026-03-18 10:12:51.90346608 +0000 UTC m=+135.956212019,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,} Mar 18 10:12:58.261197 master-0 kubenswrapper[30420]: > Mar 18 10:12:58.351490 master-0 kubenswrapper[30420]: E0318 10:12:58.351411 30420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Mar 18 10:13:01.553034 master-0 kubenswrapper[30420]: E0318 10:13:01.552924 30420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Mar 18 10:13:04.167127 master-0 kubenswrapper[30420]: I0318 10:13:04.167055 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:13:04.168757 master-0 kubenswrapper[30420]: I0318 10:13:04.168640 30420 status_manager.go:851] "Failed to get status for pod" podUID="840c140c-d526-45b2-8c25-9df4c4efd602" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 10:13:04.194267 master-0 kubenswrapper[30420]: I0318 10:13:04.194217 30420 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="71cb30b5-8d30-4001-9c05-dac219430657" Mar 18 10:13:04.194267 master-0 kubenswrapper[30420]: I0318 10:13:04.194253 30420 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="71cb30b5-8d30-4001-9c05-dac219430657" Mar 18 10:13:04.194888 master-0 kubenswrapper[30420]: E0318 10:13:04.194847 30420 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:13:04.195383 master-0 kubenswrapper[30420]: I0318 10:13:04.195349 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:13:04.229321 master-0 kubenswrapper[30420]: W0318 10:13:04.229066 30420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod274c4bebf95a655851b2cf276fe43ef7.slice/crio-5c0248a83c017039dbd421698182276b94c405d33b59e46977ede3bc2bfbd647 WatchSource:0}: Error finding container 5c0248a83c017039dbd421698182276b94c405d33b59e46977ede3bc2bfbd647: Status 404 returned error can't find the container with id 5c0248a83c017039dbd421698182276b94c405d33b59e46977ede3bc2bfbd647 Mar 18 10:13:05.052841 master-0 kubenswrapper[30420]: I0318 10:13:05.052738 30420 generic.go:334] "Generic (PLEG): container finished" podID="274c4bebf95a655851b2cf276fe43ef7" containerID="e058165b4384d63634854d84c58f312d99f959b099b878a955a750b869af883c" exitCode=0 Mar 18 10:13:05.053109 master-0 kubenswrapper[30420]: I0318 10:13:05.052915 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerDied","Data":"e058165b4384d63634854d84c58f312d99f959b099b878a955a750b869af883c"} Mar 18 10:13:05.053109 master-0 kubenswrapper[30420]: I0318 10:13:05.052994 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"5c0248a83c017039dbd421698182276b94c405d33b59e46977ede3bc2bfbd647"} Mar 18 10:13:05.053481 master-0 kubenswrapper[30420]: I0318 10:13:05.053446 30420 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="71cb30b5-8d30-4001-9c05-dac219430657" Mar 18 10:13:05.053553 master-0 kubenswrapper[30420]: I0318 10:13:05.053486 30420 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
podUID="71cb30b5-8d30-4001-9c05-dac219430657" Mar 18 10:13:05.055115 master-0 kubenswrapper[30420]: E0318 10:13:05.054912 30420 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:13:05.055115 master-0 kubenswrapper[30420]: I0318 10:13:05.055023 30420 status_manager.go:851] "Failed to get status for pod" podUID="840c140c-d526-45b2-8c25-9df4c4efd602" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 10:13:05.060445 master-0 kubenswrapper[30420]: I0318 10:13:05.060388 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3ddfa5bb627414042dcc2d2204092c5a/kube-controller-manager/0.log" Mar 18 10:13:05.060558 master-0 kubenswrapper[30420]: I0318 10:13:05.060500 30420 generic.go:334] "Generic (PLEG): container finished" podID="3ddfa5bb627414042dcc2d2204092c5a" containerID="fdfbe791c7dc81669c0055767b2119c9a2cf184b178248ae50fb983ef7ccd9a8" exitCode=1 Mar 18 10:13:05.060609 master-0 kubenswrapper[30420]: I0318 10:13:05.060558 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3ddfa5bb627414042dcc2d2204092c5a","Type":"ContainerDied","Data":"fdfbe791c7dc81669c0055767b2119c9a2cf184b178248ae50fb983ef7ccd9a8"} Mar 18 10:13:05.061427 master-0 kubenswrapper[30420]: I0318 10:13:05.061400 30420 scope.go:117] "RemoveContainer" containerID="fdfbe791c7dc81669c0055767b2119c9a2cf184b178248ae50fb983ef7ccd9a8" Mar 18 10:13:05.061795 master-0 kubenswrapper[30420]: I0318 10:13:05.061744 30420 
status_manager.go:851] "Failed to get status for pod" podUID="3ddfa5bb627414042dcc2d2204092c5a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 10:13:05.062808 master-0 kubenswrapper[30420]: I0318 10:13:05.062739 30420 status_manager.go:851] "Failed to get status for pod" podUID="840c140c-d526-45b2-8c25-9df4c4efd602" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 10:13:05.603450 master-0 kubenswrapper[30420]: I0318 10:13:05.603354 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:13:06.102848 master-0 kubenswrapper[30420]: I0318 10:13:06.101333 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3ddfa5bb627414042dcc2d2204092c5a/kube-controller-manager/0.log" Mar 18 10:13:06.102848 master-0 kubenswrapper[30420]: I0318 10:13:06.102565 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3ddfa5bb627414042dcc2d2204092c5a","Type":"ContainerStarted","Data":"6262b65beb6f2b9683bc9235394d7e13025f8660d9fd4a525d7b5aaf9d248d9f"} Mar 18 10:13:06.115849 master-0 kubenswrapper[30420]: I0318 10:13:06.113639 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"73bbf85da98b11abc1e3f7fb0309859d15d85b19985fb44d5157ae38ead76417"} Mar 18 10:13:06.115849 master-0 
kubenswrapper[30420]: I0318 10:13:06.113690 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"7c78345e3cd6157afe789a1d51f308df6db2e06cde6449df31e565005d2ccc7f"} Mar 18 10:13:06.115849 master-0 kubenswrapper[30420]: I0318 10:13:06.113702 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"5aa90ed2170264c29fda815cfeb6cc888b95858808e2dd1ac9efbb6c594ab073"} Mar 18 10:13:07.126571 master-0 kubenswrapper[30420]: I0318 10:13:07.126492 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"2680b4751d426289684b52acc68c509cc02be48e2bc69565957eef161d77a525"} Mar 18 10:13:07.126571 master-0 kubenswrapper[30420]: I0318 10:13:07.126560 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"63316c8b9878da97aeafcfa52fd260c71baa0f783e8eaf873954dff67a5ff51d"} Mar 18 10:13:07.127147 master-0 kubenswrapper[30420]: I0318 10:13:07.126881 30420 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="71cb30b5-8d30-4001-9c05-dac219430657" Mar 18 10:13:07.127147 master-0 kubenswrapper[30420]: I0318 10:13:07.126909 30420 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="71cb30b5-8d30-4001-9c05-dac219430657" Mar 18 10:13:09.196622 master-0 kubenswrapper[30420]: I0318 10:13:09.196523 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:13:09.197645 master-0 
kubenswrapper[30420]: I0318 10:13:09.196686 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:13:09.203422 master-0 kubenswrapper[30420]: I0318 10:13:09.203364 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:13:12.137613 master-0 kubenswrapper[30420]: I0318 10:13:12.137546 30420 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:13:12.166932 master-0 kubenswrapper[30420]: I0318 10:13:12.166782 30420 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="71cb30b5-8d30-4001-9c05-dac219430657" Mar 18 10:13:12.167241 master-0 kubenswrapper[30420]: I0318 10:13:12.167214 30420 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="71cb30b5-8d30-4001-9c05-dac219430657" Mar 18 10:13:12.167695 master-0 kubenswrapper[30420]: I0318 10:13:12.167636 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:13:12.181366 master-0 kubenswrapper[30420]: I0318 10:13:12.181306 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:13:12.183851 master-0 kubenswrapper[30420]: I0318 10:13:12.183790 30420 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="274c4bebf95a655851b2cf276fe43ef7" podUID="ab48b8ed-f911-4a7e-8e1f-f5ee66843f2d" Mar 18 10:13:12.583378 master-0 kubenswrapper[30420]: I0318 10:13:12.583306 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:13:12.583618 
master-0 kubenswrapper[30420]: I0318 10:13:12.583475 30420 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 18 10:13:12.583618 master-0 kubenswrapper[30420]: I0318 10:13:12.583542 30420 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3ddfa5bb627414042dcc2d2204092c5a" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 18 10:13:13.175383 master-0 kubenswrapper[30420]: I0318 10:13:13.175321 30420 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="71cb30b5-8d30-4001-9c05-dac219430657" Mar 18 10:13:13.175383 master-0 kubenswrapper[30420]: I0318 10:13:13.175363 30420 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="71cb30b5-8d30-4001-9c05-dac219430657" Mar 18 10:13:14.181385 master-0 kubenswrapper[30420]: I0318 10:13:14.181324 30420 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="71cb30b5-8d30-4001-9c05-dac219430657" Mar 18 10:13:14.181385 master-0 kubenswrapper[30420]: I0318 10:13:14.181360 30420 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="71cb30b5-8d30-4001-9c05-dac219430657" Mar 18 10:13:15.603234 master-0 kubenswrapper[30420]: I0318 10:13:15.603127 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:13:16.194630 master-0 kubenswrapper[30420]: I0318 10:13:16.194540 30420 
status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="274c4bebf95a655851b2cf276fe43ef7" podUID="ab48b8ed-f911-4a7e-8e1f-f5ee66843f2d" Mar 18 10:13:21.963550 master-0 kubenswrapper[30420]: I0318 10:13:21.963443 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 18 10:13:22.070491 master-0 kubenswrapper[30420]: I0318 10:13:22.070384 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 18 10:13:22.581678 master-0 kubenswrapper[30420]: I0318 10:13:22.581597 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 18 10:13:22.583452 master-0 kubenswrapper[30420]: I0318 10:13:22.583397 30420 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 18 10:13:22.583567 master-0 kubenswrapper[30420]: I0318 10:13:22.583461 30420 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3ddfa5bb627414042dcc2d2204092c5a" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 18 10:13:22.587692 master-0 kubenswrapper[30420]: I0318 10:13:22.587634 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 18 10:13:22.659678 master-0 kubenswrapper[30420]: I0318 10:13:22.659623 30420 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 18 10:13:22.818653 master-0 kubenswrapper[30420]: I0318 10:13:22.818600 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 18 10:13:22.894848 master-0 kubenswrapper[30420]: I0318 10:13:22.894587 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-bs6wb" Mar 18 10:13:23.067222 master-0 kubenswrapper[30420]: I0318 10:13:23.067165 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-84shv" Mar 18 10:13:23.478818 master-0 kubenswrapper[30420]: I0318 10:13:23.478755 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 18 10:13:23.648713 master-0 kubenswrapper[30420]: I0318 10:13:23.648654 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Mar 18 10:13:23.714924 master-0 kubenswrapper[30420]: I0318 10:13:23.714867 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 18 10:13:23.825721 master-0 kubenswrapper[30420]: I0318 10:13:23.825679 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 18 10:13:23.886809 master-0 kubenswrapper[30420]: I0318 10:13:23.886750 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 18 10:13:24.129588 master-0 kubenswrapper[30420]: I0318 10:13:24.129435 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 18 10:13:24.307034 master-0 kubenswrapper[30420]: I0318 10:13:24.306949 30420 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-gllg9" Mar 18 10:13:24.321508 master-0 kubenswrapper[30420]: I0318 10:13:24.321448 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 18 10:13:24.345766 master-0 kubenswrapper[30420]: I0318 10:13:24.345682 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-8wt5h" Mar 18 10:13:24.415710 master-0 kubenswrapper[30420]: I0318 10:13:24.415554 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 18 10:13:24.422614 master-0 kubenswrapper[30420]: I0318 10:13:24.422561 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 18 10:13:24.557712 master-0 kubenswrapper[30420]: I0318 10:13:24.557614 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 18 10:13:24.723144 master-0 kubenswrapper[30420]: I0318 10:13:24.722951 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 18 10:13:24.739374 master-0 kubenswrapper[30420]: I0318 10:13:24.739256 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 18 10:13:24.993389 master-0 kubenswrapper[30420]: I0318 10:13:24.993293 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 18 10:13:25.034193 master-0 kubenswrapper[30420]: I0318 10:13:25.034113 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 18 10:13:25.122211 master-0 kubenswrapper[30420]: I0318 
10:13:25.122141 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 18 10:13:25.219623 master-0 kubenswrapper[30420]: I0318 10:13:25.219564 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 18 10:13:25.284432 master-0 kubenswrapper[30420]: I0318 10:13:25.284269 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 18 10:13:25.296518 master-0 kubenswrapper[30420]: I0318 10:13:25.296455 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 18 10:13:25.575027 master-0 kubenswrapper[30420]: I0318 10:13:25.574948 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 18 10:13:25.743455 master-0 kubenswrapper[30420]: I0318 10:13:25.743342 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 10:13:25.809799 master-0 kubenswrapper[30420]: I0318 10:13:25.808068 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 18 10:13:25.839142 master-0 kubenswrapper[30420]: I0318 10:13:25.839001 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config" Mar 18 10:13:25.889974 master-0 kubenswrapper[30420]: I0318 10:13:25.889915 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 18 10:13:25.968790 master-0 kubenswrapper[30420]: I0318 10:13:25.968692 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-c5mc5" Mar 18 10:13:26.073557 master-0 kubenswrapper[30420]: I0318 10:13:26.073464 
30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 18 10:13:26.170856 master-0 kubenswrapper[30420]: I0318 10:13:26.169657 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 10:13:26.205194 master-0 kubenswrapper[30420]: I0318 10:13:26.205100 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-g2rgj" Mar 18 10:13:26.226556 master-0 kubenswrapper[30420]: I0318 10:13:26.226458 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 18 10:13:26.570395 master-0 kubenswrapper[30420]: I0318 10:13:26.570334 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 18 10:13:26.773676 master-0 kubenswrapper[30420]: I0318 10:13:26.773589 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 18 10:13:26.807952 master-0 kubenswrapper[30420]: I0318 10:13:26.807884 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 18 10:13:26.846283 master-0 kubenswrapper[30420]: I0318 10:13:26.846133 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 18 10:13:26.874676 master-0 kubenswrapper[30420]: I0318 10:13:26.874601 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 18 10:13:26.900749 master-0 kubenswrapper[30420]: I0318 10:13:26.900662 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 18 10:13:26.923964 master-0 kubenswrapper[30420]: I0318 10:13:26.923893 30420 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 18 10:13:26.947224 master-0 kubenswrapper[30420]: I0318 10:13:26.947151 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 18 10:13:27.014811 master-0 kubenswrapper[30420]: I0318 10:13:27.014730 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 18 10:13:27.025256 master-0 kubenswrapper[30420]: I0318 10:13:27.025170 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 18 10:13:27.090670 master-0 kubenswrapper[30420]: I0318 10:13:27.090562 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-cgqlv" Mar 18 10:13:27.313681 master-0 kubenswrapper[30420]: I0318 10:13:27.313620 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 18 10:13:27.354586 master-0 kubenswrapper[30420]: I0318 10:13:27.354471 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 18 10:13:27.413421 master-0 kubenswrapper[30420]: I0318 10:13:27.413364 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 18 10:13:27.461245 master-0 kubenswrapper[30420]: I0318 10:13:27.461158 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 18 10:13:27.488704 master-0 kubenswrapper[30420]: I0318 10:13:27.488444 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 18 10:13:27.494026 master-0 kubenswrapper[30420]: I0318 10:13:27.493964 30420 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 18 10:13:27.498379 master-0 kubenswrapper[30420]: I0318 10:13:27.498301 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Mar 18 10:13:27.514880 master-0 kubenswrapper[30420]: I0318 10:13:27.514322 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 18 10:13:27.673034 master-0 kubenswrapper[30420]: I0318 10:13:27.671847 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 18 10:13:27.673789 master-0 kubenswrapper[30420]: I0318 10:13:27.673732 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-4lcwf"
Mar 18 10:13:27.700661 master-0 kubenswrapper[30420]: I0318 10:13:27.700613 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Mar 18 10:13:27.766570 master-0 kubenswrapper[30420]: I0318 10:13:27.766509 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Mar 18 10:13:27.839374 master-0 kubenswrapper[30420]: I0318 10:13:27.839296 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 18 10:13:27.865268 master-0 kubenswrapper[30420]: I0318 10:13:27.865204 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 18 10:13:27.937082 master-0 kubenswrapper[30420]: I0318 10:13:27.936937 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Mar 18 10:13:28.282430 master-0 kubenswrapper[30420]: I0318 10:13:28.282337 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 18 10:13:28.393587 master-0 kubenswrapper[30420]: I0318 10:13:28.393337 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 18 10:13:28.408846 master-0 kubenswrapper[30420]: I0318 10:13:28.408737 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Mar 18 10:13:28.446850 master-0 kubenswrapper[30420]: I0318 10:13:28.446763 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 18 10:13:28.452237 master-0 kubenswrapper[30420]: I0318 10:13:28.452199 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-t5rvh"
Mar 18 10:13:28.537299 master-0 kubenswrapper[30420]: I0318 10:13:28.537229 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Mar 18 10:13:28.562363 master-0 kubenswrapper[30420]: I0318 10:13:28.562264 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 18 10:13:28.722187 master-0 kubenswrapper[30420]: I0318 10:13:28.722133 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 18 10:13:28.752366 master-0 kubenswrapper[30420]: I0318 10:13:28.752304 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 18 10:13:28.765968 master-0 kubenswrapper[30420]: I0318 10:13:28.765893 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Mar 18 10:13:28.820259 master-0 kubenswrapper[30420]: I0318 10:13:28.818614 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 18 10:13:28.853923 master-0 kubenswrapper[30420]: I0318 10:13:28.853804 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 18 10:13:28.886586 master-0 kubenswrapper[30420]: I0318 10:13:28.886519 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Mar 18 10:13:28.940001 master-0 kubenswrapper[30420]: I0318 10:13:28.939937 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 18 10:13:28.971999 master-0 kubenswrapper[30420]: I0318 10:13:28.971913 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 18 10:13:29.007223 master-0 kubenswrapper[30420]: I0318 10:13:29.007156 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 18 10:13:29.026798 master-0 kubenswrapper[30420]: I0318 10:13:29.026710 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Mar 18 10:13:29.044170 master-0 kubenswrapper[30420]: I0318 10:13:29.044124 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-glndn"
Mar 18 10:13:29.190477 master-0 kubenswrapper[30420]: I0318 10:13:29.190249 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 18 10:13:29.225063 master-0 kubenswrapper[30420]: I0318 10:13:29.224983 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 18 10:13:29.229351 master-0 kubenswrapper[30420]: I0318 10:13:29.229282 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-fr2b8"
Mar 18 10:13:29.274350 master-0 kubenswrapper[30420]: I0318 10:13:29.274282 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-ht56j"
Mar 18 10:13:29.281814 master-0 kubenswrapper[30420]: I0318 10:13:29.281747 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 18 10:13:29.450302 master-0 kubenswrapper[30420]: I0318 10:13:29.450150 30420 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 18 10:13:29.451195 master-0 kubenswrapper[30420]: I0318 10:13:29.450778 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 18 10:13:29.457066 master-0 kubenswrapper[30420]: I0318 10:13:29.457008 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Mar 18 10:13:29.464432 master-0 kubenswrapper[30420]: I0318 10:13:29.464377 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 18 10:13:29.478139 master-0 kubenswrapper[30420]: I0318 10:13:29.478082 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 18 10:13:29.585278 master-0 kubenswrapper[30420]: I0318 10:13:29.585190 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-bncrc"
Mar 18 10:13:29.711233 master-0 kubenswrapper[30420]: I0318 10:13:29.711004 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Mar 18 10:13:29.719341 master-0 kubenswrapper[30420]: I0318 10:13:29.719256 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 18 10:13:29.821450 master-0 kubenswrapper[30420]: I0318 10:13:29.821356 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Mar 18 10:13:29.853376 master-0 kubenswrapper[30420]: I0318 10:13:29.853280 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Mar 18 10:13:29.884722 master-0 kubenswrapper[30420]: I0318 10:13:29.884600 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Mar 18 10:13:29.903795 master-0 kubenswrapper[30420]: I0318 10:13:29.903455 30420 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 18 10:13:30.091748 master-0 kubenswrapper[30420]: I0318 10:13:30.091325 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 18 10:13:30.383627 master-0 kubenswrapper[30420]: I0318 10:13:30.383284 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 18 10:13:30.465407 master-0 kubenswrapper[30420]: I0318 10:13:30.465344 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 18 10:13:30.509572 master-0 kubenswrapper[30420]: I0318 10:13:30.509484 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Mar 18 10:13:30.513488 master-0 kubenswrapper[30420]: I0318 10:13:30.513421 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 18 10:13:30.519737 master-0 kubenswrapper[30420]: I0318 10:13:30.519671 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 18 10:13:30.589182 master-0 kubenswrapper[30420]: I0318 10:13:30.588791 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Mar 18 10:13:30.708597 master-0 kubenswrapper[30420]: I0318 10:13:30.708479 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Mar 18 10:13:30.743977 master-0 kubenswrapper[30420]: I0318 10:13:30.743912 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 18 10:13:30.790006 master-0 kubenswrapper[30420]: I0318 10:13:30.789942 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 18 10:13:30.859476 master-0 kubenswrapper[30420]: I0318 10:13:30.859406 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Mar 18 10:13:30.886802 master-0 kubenswrapper[30420]: I0318 10:13:30.886748 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Mar 18 10:13:30.899051 master-0 kubenswrapper[30420]: I0318 10:13:30.898995 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 18 10:13:30.943098 master-0 kubenswrapper[30420]: I0318 10:13:30.943044 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 18 10:13:30.948968 master-0 kubenswrapper[30420]: I0318 10:13:30.948938 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 18 10:13:31.056586 master-0 kubenswrapper[30420]: I0318 10:13:31.056497 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-p9m8v"
Mar 18 10:13:31.059033 master-0 kubenswrapper[30420]: I0318 10:13:31.058961 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 18 10:13:31.101012 master-0 kubenswrapper[30420]: I0318 10:13:31.100914 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Mar 18 10:13:31.125027 master-0 kubenswrapper[30420]: I0318 10:13:31.124955 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 18 10:13:31.125267 master-0 kubenswrapper[30420]: I0318 10:13:31.125190 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-lrdkh"
Mar 18 10:13:31.273476 master-0 kubenswrapper[30420]: I0318 10:13:31.273394 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 18 10:13:31.400192 master-0 kubenswrapper[30420]: I0318 10:13:31.400040 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 18 10:13:31.439264 master-0 kubenswrapper[30420]: I0318 10:13:31.439188 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Mar 18 10:13:31.466139 master-0 kubenswrapper[30420]: I0318 10:13:31.466075 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 18 10:13:31.479938 master-0 kubenswrapper[30420]: I0318 10:13:31.479857 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 18 10:13:31.546632 master-0 kubenswrapper[30420]: I0318 10:13:31.546557 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 18 10:13:31.554890 master-0 kubenswrapper[30420]: I0318 10:13:31.554775 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 18 10:13:31.604186 master-0 kubenswrapper[30420]: I0318 10:13:31.604088 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 18 10:13:31.672033 master-0 kubenswrapper[30420]: I0318 10:13:31.671875 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Mar 18 10:13:31.778475 master-0 kubenswrapper[30420]: I0318 10:13:31.778392 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 18 10:13:31.785401 master-0 kubenswrapper[30420]: I0318 10:13:31.785316 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-2w9kv"
Mar 18 10:13:31.791702 master-0 kubenswrapper[30420]: I0318 10:13:31.791643 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Mar 18 10:13:31.864854 master-0 kubenswrapper[30420]: I0318 10:13:31.864794 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 18 10:13:31.920719 master-0 kubenswrapper[30420]: I0318 10:13:31.920630 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 18 10:13:31.948348 master-0 kubenswrapper[30420]: I0318 10:13:31.948190 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Mar 18 10:13:32.012501 master-0 kubenswrapper[30420]: I0318 10:13:32.012429 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Mar 18 10:13:32.043396 master-0 kubenswrapper[30420]: I0318 10:13:32.043326 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-945t9"
Mar 18 10:13:32.114522 master-0 kubenswrapper[30420]: I0318 10:13:32.114430 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 18 10:13:32.131169 master-0 kubenswrapper[30420]: I0318 10:13:32.131115 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-gxfn6"
Mar 18 10:13:32.143215 master-0 kubenswrapper[30420]: I0318 10:13:32.143147 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 18 10:13:32.208509 master-0 kubenswrapper[30420]: I0318 10:13:32.208322 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 18 10:13:32.217283 master-0 kubenswrapper[30420]: I0318 10:13:32.217228 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 18 10:13:32.255749 master-0 kubenswrapper[30420]: I0318 10:13:32.255649 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 18 10:13:32.270306 master-0 kubenswrapper[30420]: I0318 10:13:32.270232 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-t4btg"
Mar 18 10:13:32.319748 master-0 kubenswrapper[30420]: I0318 10:13:32.319666 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 18 10:13:32.333203 master-0 kubenswrapper[30420]: I0318 10:13:32.333119 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Mar 18 10:13:32.333203 master-0 kubenswrapper[30420]: I0318 10:13:32.333125 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Mar 18 10:13:32.417360 master-0 kubenswrapper[30420]: I0318 10:13:32.417243 30420 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 18 10:13:32.470390 master-0 kubenswrapper[30420]: I0318 10:13:32.470093 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 18 10:13:32.479163 master-0 kubenswrapper[30420]: I0318 10:13:32.479089 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 18 10:13:32.483682 master-0 kubenswrapper[30420]: I0318 10:13:32.483654 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-8lfl6"
Mar 18 10:13:32.583798 master-0 kubenswrapper[30420]: I0318 10:13:32.583718 30420 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Mar 18 10:13:32.584059 master-0 kubenswrapper[30420]: I0318 10:13:32.583800 30420 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3ddfa5bb627414042dcc2d2204092c5a" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Mar 18 10:13:32.584059 master-0 kubenswrapper[30420]: I0318 10:13:32.583874 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 10:13:32.584586 master-0 kubenswrapper[30420]: I0318 10:13:32.584536 30420 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"6262b65beb6f2b9683bc9235394d7e13025f8660d9fd4a525d7b5aaf9d248d9f"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Mar 18 10:13:32.584681 master-0 kubenswrapper[30420]: I0318 10:13:32.584663 30420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3ddfa5bb627414042dcc2d2204092c5a" containerName="kube-controller-manager" containerID="cri-o://6262b65beb6f2b9683bc9235394d7e13025f8660d9fd4a525d7b5aaf9d248d9f" gracePeriod=30
Mar 18 10:13:32.589745 master-0 kubenswrapper[30420]: I0318 10:13:32.589692 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Mar 18 10:13:32.610193 master-0 kubenswrapper[30420]: I0318 10:13:32.610107 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Mar 18 10:13:32.749401 master-0 kubenswrapper[30420]: I0318 10:13:32.749155 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Mar 18 10:13:32.769021 master-0 kubenswrapper[30420]: I0318 10:13:32.768939 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Mar 18 10:13:32.864235 master-0 kubenswrapper[30420]: I0318 10:13:32.864155 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 18 10:13:32.991355 master-0 kubenswrapper[30420]: I0318 10:13:32.991277 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Mar 18 10:13:33.000270 master-0 kubenswrapper[30420]: I0318 10:13:33.000100 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 18 10:13:33.031109 master-0 kubenswrapper[30420]: I0318 10:13:33.031037 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 18 10:13:33.038913 master-0 kubenswrapper[30420]: I0318 10:13:33.038804 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Mar 18 10:13:33.093878 master-0 kubenswrapper[30420]: I0318 10:13:33.093756 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Mar 18 10:13:33.114307 master-0 kubenswrapper[30420]: I0318 10:13:33.114218 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Mar 18 10:13:33.167084 master-0 kubenswrapper[30420]: I0318 10:13:33.166980 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Mar 18 10:13:33.195423 master-0 kubenswrapper[30420]: I0318 10:13:33.195246 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 18 10:13:33.293736 master-0 kubenswrapper[30420]: I0318 10:13:33.293630 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 18 10:13:33.299200 master-0 kubenswrapper[30420]: I0318 10:13:33.299130 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 18 10:13:33.378933 master-0 kubenswrapper[30420]: I0318 10:13:33.378814 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-czmbt"
Mar 18 10:13:33.463119 master-0 kubenswrapper[30420]: I0318 10:13:33.463027 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Mar 18 10:13:33.540659 master-0 kubenswrapper[30420]: I0318 10:13:33.540590 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 18 10:13:33.554203 master-0 kubenswrapper[30420]: I0318 10:13:33.554086 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 18 10:13:33.580970 master-0 kubenswrapper[30420]: I0318 10:13:33.580893 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-113q5nsjog6km"
Mar 18 10:13:33.592926 master-0 kubenswrapper[30420]: I0318 10:13:33.592875 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 18 10:13:33.626957 master-0 kubenswrapper[30420]: I0318 10:13:33.626785 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client"
Mar 18 10:13:33.650393 master-0 kubenswrapper[30420]: I0318 10:13:33.650311 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 18 10:13:33.663796 master-0 kubenswrapper[30420]: I0318 10:13:33.663675 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 18 10:13:33.706815 master-0 kubenswrapper[30420]: I0318 10:13:33.706769 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Mar 18 10:13:33.752138 master-0 kubenswrapper[30420]: I0318 10:13:33.752066 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Mar 18 10:13:33.770102 master-0 kubenswrapper[30420]: I0318 10:13:33.770029 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Mar 18 10:13:33.798865 master-0 kubenswrapper[30420]: I0318 10:13:33.798776 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 18 10:13:33.798865 master-0 kubenswrapper[30420]: I0318 10:13:33.798815 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Mar 18 10:13:33.861062 master-0 kubenswrapper[30420]: I0318 10:13:33.860894 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 18 10:13:33.909654 master-0 kubenswrapper[30420]: I0318 10:13:33.909514 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Mar 18 10:13:33.947662 master-0 kubenswrapper[30420]: I0318 10:13:33.947555 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 18 10:13:33.969439 master-0 kubenswrapper[30420]: I0318 10:13:33.969307 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Mar 18 10:13:33.978592 master-0 kubenswrapper[30420]: I0318 10:13:33.978551 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Mar 18 10:13:33.982543 master-0 kubenswrapper[30420]: I0318 10:13:33.982456 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-pvxkh"
Mar 18 10:13:34.081019 master-0 kubenswrapper[30420]: I0318 10:13:34.080927 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 18 10:13:34.107019 master-0 kubenswrapper[30420]: I0318 10:13:34.106920 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-wv86q"
Mar 18 10:13:34.112751 master-0 kubenswrapper[30420]: I0318 10:13:34.112612 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 18 10:13:34.115591 master-0 kubenswrapper[30420]: I0318 10:13:34.115543 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 18 10:13:34.178150 master-0 kubenswrapper[30420]: I0318 10:13:34.178075 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Mar 18 10:13:34.389268 master-0 kubenswrapper[30420]: I0318 10:13:34.389050 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 18 10:13:34.397704 master-0 kubenswrapper[30420]: I0318 10:13:34.397637 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 18 10:13:34.414050 master-0 kubenswrapper[30420]: I0318 10:13:34.413979 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 18 10:13:34.430524 master-0 kubenswrapper[30420]: I0318 10:13:34.430473 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 18 10:13:34.518093 master-0 kubenswrapper[30420]: I0318 10:13:34.518039 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Mar 18 10:13:34.561180 master-0 kubenswrapper[30420]: I0318 10:13:34.561082 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Mar 18 10:13:34.707296 master-0 kubenswrapper[30420]: I0318 10:13:34.707095 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 18 10:13:34.790046 master-0 kubenswrapper[30420]: I0318 10:13:34.789977 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Mar 18 10:13:34.905442 master-0 kubenswrapper[30420]: I0318 10:13:34.905340 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 18 10:13:34.922229 master-0 kubenswrapper[30420]: I0318 10:13:34.922162 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 18 10:13:34.991463 master-0 kubenswrapper[30420]: I0318 10:13:34.991286 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 18 10:13:35.026733 master-0 kubenswrapper[30420]: I0318 10:13:35.026634 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-mkddq"
Mar 18 10:13:35.056629 master-0 kubenswrapper[30420]: I0318 10:13:35.056529 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Mar 18 10:13:35.269040 master-0 kubenswrapper[30420]: I0318 10:13:35.268627 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 18 10:13:35.279127 master-0 kubenswrapper[30420]: I0318 10:13:35.279021 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-qcchq"
Mar 18 10:13:35.354338 master-0 kubenswrapper[30420]: I0318 10:13:35.354282 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Mar 18 10:13:35.539434 master-0 kubenswrapper[30420]: I0318 10:13:35.539376 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 18 10:13:35.587791 master-0 kubenswrapper[30420]: I0318 10:13:35.587689 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Mar 18 10:13:35.614476 master-0 kubenswrapper[30420]: I0318 10:13:35.614405 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 18 10:13:35.784577 master-0 kubenswrapper[30420]: I0318 10:13:35.784478 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 18 10:13:36.075817 master-0 kubenswrapper[30420]: I0318 10:13:36.075684 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 18 10:13:36.087860 master-0 kubenswrapper[30420]: I0318 10:13:36.087753 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Mar 18 10:13:36.173359 master-0 kubenswrapper[30420]: I0318 10:13:36.173295 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Mar 18 10:13:36.185491 master-0 kubenswrapper[30420]: I0318 10:13:36.185380 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 18 10:13:36.218154 master-0 kubenswrapper[30420]: I0318 10:13:36.218062 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Mar 18 10:13:36.233811 master-0 kubenswrapper[30420]: I0318 10:13:36.233702 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Mar 18 10:13:36.307623 master-0 kubenswrapper[30420]: I0318 10:13:36.307555 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Mar 18 10:13:36.346915 master-0 kubenswrapper[30420]: I0318 10:13:36.346710 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Mar 18 10:13:36.423165 master-0 kubenswrapper[30420]: I0318 10:13:36.423109 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 18 10:13:36.428278 master-0 kubenswrapper[30420]: I0318 10:13:36.428231 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-vxxzb"
Mar 18 10:13:36.441546 master-0 kubenswrapper[30420]: I0318 10:13:36.441492 30420 scope.go:117] "RemoveContainer" containerID="0f4bf1dfc4a190fd3410aa065645689966e325eb73cf7788b53ae0a9bf57f3cc"
Mar 18 10:13:36.492335 master-0 kubenswrapper[30420]: I0318 10:13:36.492202 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 18 10:13:36.499579 master-0 kubenswrapper[30420]: I0318 10:13:36.499369 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 18 10:13:36.638918 master-0 kubenswrapper[30420]: I0318 10:13:36.638793 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 18 10:13:36.689270 master-0 kubenswrapper[30420]: I0318 10:13:36.689238 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Mar 18 10:13:36.798107 master-0 kubenswrapper[30420]: I0318 10:13:36.798072 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 18 10:13:36.838951 master-0 kubenswrapper[30420]: I0318 10:13:36.838899 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-wqrrj"
Mar 18 10:13:36.851256 master-0 kubenswrapper[30420]: I0318 10:13:36.851223 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-hvm64"
Mar 18 10:13:36.901417 master-0 kubenswrapper[30420]: I0318 10:13:36.901264 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 18 10:13:36.937560 master-0 kubenswrapper[30420]: I0318 10:13:36.937464 30420 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 18 10:13:36.985298 master-0 kubenswrapper[30420]: I0318 10:13:36.984980 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Mar 18 10:13:36.991899 master-0 kubenswrapper[30420]: I0318 10:13:36.990418 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Mar 18 10:13:37.076894 master-0 kubenswrapper[30420]: I0318 10:13:37.076392 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 18 10:13:37.128461 master-0 kubenswrapper[30420]: I0318 10:13:37.128372 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Mar 18 10:13:37.130142 master-0 kubenswrapper[30420]: I0318 10:13:37.130103 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 18 10:13:37.135019 master-0 kubenswrapper[30420]: I0318 10:13:37.134995 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 18 10:13:37.324965 master-0 kubenswrapper[30420]: I0318 10:13:37.324887 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 18 10:13:37.332355 master-0 kubenswrapper[30420]: I0318 10:13:37.332271 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 18 10:13:37.388600 master-0 kubenswrapper[30420]: I0318 10:13:37.388539 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Mar 18 10:13:37.550632 master-0 kubenswrapper[30420]: I0318 10:13:37.550548 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 18 10:13:37.562374 master-0 kubenswrapper[30420]: I0318 10:13:37.562299 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 18 10:13:37.599214 master-0 kubenswrapper[30420]: I0318 10:13:37.598764 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 18 10:13:37.672358 master-0 kubenswrapper[30420]: I0318 10:13:37.672243 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config"
Mar 18 10:13:37.693326 master-0 kubenswrapper[30420]: I0318 10:13:37.693207 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Mar 18 10:13:37.726692 master-0 kubenswrapper[30420]: I0318 10:13:37.726176 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Mar 18 10:13:37.756744 master-0 kubenswrapper[30420]: I0318 10:13:37.756673 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Mar 18 10:13:37.884988 master-0 kubenswrapper[30420]: I0318 10:13:37.884755 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 18 10:13:38.029590 master-0 kubenswrapper[30420]: I0318 10:13:38.029501 30420 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 18 10:13:38.042797 master-0 kubenswrapper[30420]: I0318 10:13:38.042668 30420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 18 10:13:38.042797 master-0 kubenswrapper[30420]: I0318 10:13:38.042797 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-dqvx5","openshift-kube-apiserver/kube-apiserver-master-0","openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm"]
Mar 18 10:13:38.043332 master-0 kubenswrapper[30420]: E0318 10:13:38.043277 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="840c140c-d526-45b2-8c25-9df4c4efd602" containerName="installer"
Mar 18
10:13:38.043432 master-0 kubenswrapper[30420]: I0318 10:13:38.043377 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="840c140c-d526-45b2-8c25-9df4c4efd602" containerName="installer" Mar 18 10:13:38.043432 master-0 kubenswrapper[30420]: I0318 10:13:38.043397 30420 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="71cb30b5-8d30-4001-9c05-dac219430657" Mar 18 10:13:38.043575 master-0 kubenswrapper[30420]: I0318 10:13:38.043440 30420 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="71cb30b5-8d30-4001-9c05-dac219430657" Mar 18 10:13:38.043772 master-0 kubenswrapper[30420]: I0318 10:13:38.043707 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="840c140c-d526-45b2-8c25-9df4c4efd602" containerName="installer" Mar 18 10:13:38.044940 master-0 kubenswrapper[30420]: I0318 10:13:38.044881 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-dqvx5" Mar 18 10:13:38.046460 master-0 kubenswrapper[30420]: I0318 10:13:38.046386 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.047906 master-0 kubenswrapper[30420]: I0318 10:13:38.047363 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-rchwt" Mar 18 10:13:38.048377 master-0 kubenswrapper[30420]: I0318 10:13:38.048179 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Mar 18 10:13:38.053046 master-0 kubenswrapper[30420]: I0318 10:13:38.052976 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 18 10:13:38.053376 master-0 kubenswrapper[30420]: I0318 10:13:38.053300 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 18 10:13:38.053701 master-0 kubenswrapper[30420]: I0318 10:13:38.053570 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 18 10:13:38.053928 master-0 kubenswrapper[30420]: I0318 10:13:38.053773 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 18 10:13:38.054172 master-0 kubenswrapper[30420]: I0318 10:13:38.054022 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-dpzpw" Mar 18 10:13:38.055508 master-0 kubenswrapper[30420]: I0318 10:13:38.055348 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 10:13:38.055508 master-0 kubenswrapper[30420]: I0318 10:13:38.055499 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 18 10:13:38.055758 master-0 kubenswrapper[30420]: I0318 10:13:38.055544 30420 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 18 10:13:38.056029 master-0 kubenswrapper[30420]: I0318 10:13:38.055985 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 18 10:13:38.056156 master-0 kubenswrapper[30420]: I0318 10:13:38.056098 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 18 10:13:38.056431 master-0 kubenswrapper[30420]: I0318 10:13:38.056385 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 18 10:13:38.056563 master-0 kubenswrapper[30420]: I0318 10:13:38.056499 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 18 10:13:38.056678 master-0 kubenswrapper[30420]: I0318 10:13:38.056596 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 18 10:13:38.066973 master-0 kubenswrapper[30420]: I0318 10:13:38.066907 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 18 10:13:38.076308 master-0 kubenswrapper[30420]: I0318 10:13:38.076252 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 18 10:13:38.107223 master-0 kubenswrapper[30420]: I0318 10:13:38.106990 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=26.106967395 podStartE2EDuration="26.106967395s" podCreationTimestamp="2026-03-18 10:13:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 
10:13:38.102266167 +0000 UTC m=+182.155012106" watchObservedRunningTime="2026-03-18 10:13:38.106967395 +0000 UTC m=+182.159713334" Mar 18 10:13:38.155016 master-0 kubenswrapper[30420]: I0318 10:13:38.154770 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 18 10:13:38.197915 master-0 kubenswrapper[30420]: I0318 10:13:38.197742 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-session\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.198373 master-0 kubenswrapper[30420]: I0318 10:13:38.197933 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c72cb4f2-d0f2-4f20-a3b6-bf6ccd17e141-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-dqvx5\" (UID: \"c72cb4f2-d0f2-4f20-a3b6-bf6ccd17e141\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dqvx5" Mar 18 10:13:38.198373 master-0 kubenswrapper[30420]: I0318 10:13:38.197986 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.198373 master-0 kubenswrapper[30420]: I0318 10:13:38.198190 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-service-ca\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.198373 master-0 kubenswrapper[30420]: I0318 10:13:38.198366 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms2hq\" (UniqueName: \"kubernetes.io/projected/edc60dd5-333f-44bc-bb10-f10673c59074-kube-api-access-ms2hq\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.198841 master-0 kubenswrapper[30420]: I0318 10:13:38.198460 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c72cb4f2-d0f2-4f20-a3b6-bf6ccd17e141-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-dqvx5\" (UID: \"c72cb4f2-d0f2-4f20-a3b6-bf6ccd17e141\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dqvx5" Mar 18 10:13:38.198841 master-0 kubenswrapper[30420]: I0318 10:13:38.198515 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/edc60dd5-333f-44bc-bb10-f10673c59074-audit-policies\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.198841 master-0 kubenswrapper[30420]: I0318 10:13:38.198652 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-user-template-login\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: 
\"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.198841 master-0 kubenswrapper[30420]: I0318 10:13:38.198732 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.198841 master-0 kubenswrapper[30420]: I0318 10:13:38.198772 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.199223 master-0 kubenswrapper[30420]: I0318 10:13:38.199036 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/c72cb4f2-d0f2-4f20-a3b6-bf6ccd17e141-ready\") pod \"cni-sysctl-allowlist-ds-dqvx5\" (UID: \"c72cb4f2-d0f2-4f20-a3b6-bf6ccd17e141\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dqvx5" Mar 18 10:13:38.199491 master-0 kubenswrapper[30420]: I0318 10:13:38.199461 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-user-template-error\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 
10:13:38.199635 master-0 kubenswrapper[30420]: I0318 10:13:38.199616 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-router-certs\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.199986 master-0 kubenswrapper[30420]: I0318 10:13:38.199960 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.200248 master-0 kubenswrapper[30420]: I0318 10:13:38.200154 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.200497 master-0 kubenswrapper[30420]: I0318 10:13:38.200471 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/edc60dd5-333f-44bc-bb10-f10673c59074-audit-dir\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.200748 master-0 kubenswrapper[30420]: I0318 10:13:38.200713 30420 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8szd\" (UniqueName: \"kubernetes.io/projected/c72cb4f2-d0f2-4f20-a3b6-bf6ccd17e141-kube-api-access-p8szd\") pod \"cni-sysctl-allowlist-ds-dqvx5\" (UID: \"c72cb4f2-d0f2-4f20-a3b6-bf6ccd17e141\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dqvx5" Mar 18 10:13:38.301652 master-0 kubenswrapper[30420]: I0318 10:13:38.301595 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8szd\" (UniqueName: \"kubernetes.io/projected/c72cb4f2-d0f2-4f20-a3b6-bf6ccd17e141-kube-api-access-p8szd\") pod \"cni-sysctl-allowlist-ds-dqvx5\" (UID: \"c72cb4f2-d0f2-4f20-a3b6-bf6ccd17e141\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dqvx5" Mar 18 10:13:38.302020 master-0 kubenswrapper[30420]: I0318 10:13:38.301997 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-session\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.302159 master-0 kubenswrapper[30420]: I0318 10:13:38.302136 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c72cb4f2-d0f2-4f20-a3b6-bf6ccd17e141-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-dqvx5\" (UID: \"c72cb4f2-d0f2-4f20-a3b6-bf6ccd17e141\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dqvx5" Mar 18 10:13:38.302305 master-0 kubenswrapper[30420]: I0318 10:13:38.302260 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: 
\"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.302491 master-0 kubenswrapper[30420]: I0318 10:13:38.302451 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-service-ca\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.302635 master-0 kubenswrapper[30420]: I0318 10:13:38.302617 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms2hq\" (UniqueName: \"kubernetes.io/projected/edc60dd5-333f-44bc-bb10-f10673c59074-kube-api-access-ms2hq\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.302963 master-0 kubenswrapper[30420]: I0318 10:13:38.302881 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c72cb4f2-d0f2-4f20-a3b6-bf6ccd17e141-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-dqvx5\" (UID: \"c72cb4f2-d0f2-4f20-a3b6-bf6ccd17e141\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dqvx5" Mar 18 10:13:38.306878 master-0 kubenswrapper[30420]: I0318 10:13:38.305858 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 18 10:13:38.309928 master-0 kubenswrapper[30420]: I0318 10:13:38.308362 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/edc60dd5-333f-44bc-bb10-f10673c59074-audit-policies\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " 
pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.309928 master-0 kubenswrapper[30420]: I0318 10:13:38.308404 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c72cb4f2-d0f2-4f20-a3b6-bf6ccd17e141-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-dqvx5\" (UID: \"c72cb4f2-d0f2-4f20-a3b6-bf6ccd17e141\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dqvx5" Mar 18 10:13:38.309928 master-0 kubenswrapper[30420]: I0318 10:13:38.308466 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-user-template-login\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.309928 master-0 kubenswrapper[30420]: I0318 10:13:38.308509 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.309928 master-0 kubenswrapper[30420]: I0318 10:13:38.308536 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.309928 master-0 kubenswrapper[30420]: I0318 10:13:38.308559 30420 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/c72cb4f2-d0f2-4f20-a3b6-bf6ccd17e141-ready\") pod \"cni-sysctl-allowlist-ds-dqvx5\" (UID: \"c72cb4f2-d0f2-4f20-a3b6-bf6ccd17e141\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dqvx5" Mar 18 10:13:38.309928 master-0 kubenswrapper[30420]: I0318 10:13:38.308588 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-user-template-error\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.309928 master-0 kubenswrapper[30420]: I0318 10:13:38.308620 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-router-certs\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.309928 master-0 kubenswrapper[30420]: I0318 10:13:38.308646 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.309928 master-0 kubenswrapper[30420]: I0318 10:13:38.308678 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-trusted-ca-bundle\") 
pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.309928 master-0 kubenswrapper[30420]: I0318 10:13:38.308739 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/edc60dd5-333f-44bc-bb10-f10673c59074-audit-dir\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.309928 master-0 kubenswrapper[30420]: I0318 10:13:38.308846 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/edc60dd5-333f-44bc-bb10-f10673c59074-audit-dir\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.309928 master-0 kubenswrapper[30420]: I0318 10:13:38.309433 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.309928 master-0 kubenswrapper[30420]: I0318 10:13:38.309840 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-service-ca\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.311594 master-0 kubenswrapper[30420]: I0318 10:13:38.311552 30420 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/c72cb4f2-d0f2-4f20-a3b6-bf6ccd17e141-ready\") pod \"cni-sysctl-allowlist-ds-dqvx5\" (UID: \"c72cb4f2-d0f2-4f20-a3b6-bf6ccd17e141\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dqvx5" Mar 18 10:13:38.312011 master-0 kubenswrapper[30420]: I0318 10:13:38.311958 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c72cb4f2-d0f2-4f20-a3b6-bf6ccd17e141-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-dqvx5\" (UID: \"c72cb4f2-d0f2-4f20-a3b6-bf6ccd17e141\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dqvx5" Mar 18 10:13:38.312659 master-0 kubenswrapper[30420]: I0318 10:13:38.312618 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.315476 master-0 kubenswrapper[30420]: I0318 10:13:38.315436 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-session\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.316154 master-0 kubenswrapper[30420]: I0318 10:13:38.316103 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/edc60dd5-333f-44bc-bb10-f10673c59074-audit-policies\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " 
pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.316324 master-0 kubenswrapper[30420]: I0318 10:13:38.316277 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-user-template-login\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.317584 master-0 kubenswrapper[30420]: I0318 10:13:38.317538 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-router-certs\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.319484 master-0 kubenswrapper[30420]: I0318 10:13:38.319442 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.319741 master-0 kubenswrapper[30420]: I0318 10:13:38.319685 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-user-template-error\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.319913 master-0 kubenswrapper[30420]: I0318 10:13:38.319894 30420 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.320566 master-0 kubenswrapper[30420]: I0318 10:13:38.320512 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.345286 master-0 kubenswrapper[30420]: I0318 10:13:38.338307 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8szd\" (UniqueName: \"kubernetes.io/projected/c72cb4f2-d0f2-4f20-a3b6-bf6ccd17e141-kube-api-access-p8szd\") pod \"cni-sysctl-allowlist-ds-dqvx5\" (UID: \"c72cb4f2-d0f2-4f20-a3b6-bf6ccd17e141\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dqvx5" Mar 18 10:13:38.345286 master-0 kubenswrapper[30420]: I0318 10:13:38.340457 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms2hq\" (UniqueName: \"kubernetes.io/projected/edc60dd5-333f-44bc-bb10-f10673c59074-kube-api-access-ms2hq\") pod \"oauth-openshift-7c7b74cb9b-hkblm\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:13:38.352351 master-0 kubenswrapper[30420]: I0318 10:13:38.352290 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm"] Mar 18 10:13:38.382101 master-0 kubenswrapper[30420]: I0318 10:13:38.381999 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-dqvx5"
Mar 18 10:13:38.399245 master-0 kubenswrapper[30420]: I0318 10:13:38.399195 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm"
Mar 18 10:13:38.422166 master-0 kubenswrapper[30420]: W0318 10:13:38.421910 30420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc72cb4f2_d0f2_4f20_a3b6_bf6ccd17e141.slice/crio-89412381f706b926defd3d3015f12fb9c93c8ae43a869d6b89b11b2a34615c3c WatchSource:0}: Error finding container 89412381f706b926defd3d3015f12fb9c93c8ae43a869d6b89b11b2a34615c3c: Status 404 returned error can't find the container with id 89412381f706b926defd3d3015f12fb9c93c8ae43a869d6b89b11b2a34615c3c
Mar 18 10:13:38.760765 master-0 kubenswrapper[30420]: I0318 10:13:38.760579 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 18 10:13:38.814356 master-0 kubenswrapper[30420]: I0318 10:13:38.814301 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm"]
Mar 18 10:13:38.824756 master-0 kubenswrapper[30420]: W0318 10:13:38.824705 30420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedc60dd5_333f_44bc_bb10_f10673c59074.slice/crio-7104e089085ccb21356703553eb0de595cd957d72f937c0fc9cc0a6e933d1d6c WatchSource:0}: Error finding container 7104e089085ccb21356703553eb0de595cd957d72f937c0fc9cc0a6e933d1d6c: Status 404 returned error can't find the container with id 7104e089085ccb21356703553eb0de595cd957d72f937c0fc9cc0a6e933d1d6c
Mar 18 10:13:38.827176 master-0 kubenswrapper[30420]: I0318 10:13:38.827117 30420 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 18 10:13:38.854369 master-0 kubenswrapper[30420]: I0318 10:13:38.854308 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 18 10:13:38.861544 master-0 kubenswrapper[30420]: I0318 10:13:38.861498 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 18 10:13:39.260894 master-0 kubenswrapper[30420]: I0318 10:13:39.260682 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 18 10:13:39.276565 master-0 kubenswrapper[30420]: I0318 10:13:39.276436 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-vsqqr"
Mar 18 10:13:39.397654 master-0 kubenswrapper[30420]: I0318 10:13:39.397584 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 18 10:13:39.408405 master-0 kubenswrapper[30420]: I0318 10:13:39.408367 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 18 10:13:39.423989 master-0 kubenswrapper[30420]: I0318 10:13:39.423942 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" event={"ID":"edc60dd5-333f-44bc-bb10-f10673c59074","Type":"ContainerStarted","Data":"7104e089085ccb21356703553eb0de595cd957d72f937c0fc9cc0a6e933d1d6c"}
Mar 18 10:13:39.426074 master-0 kubenswrapper[30420]: I0318 10:13:39.426002 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-dqvx5" event={"ID":"c72cb4f2-d0f2-4f20-a3b6-bf6ccd17e141","Type":"ContainerStarted","Data":"eae251a75b399419a54de069f41bb6f063d9b74461f5d4c6b9bd7b15a018c2ed"}
Mar 18 10:13:39.426074 master-0 kubenswrapper[30420]: I0318 10:13:39.426063 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-dqvx5" event={"ID":"c72cb4f2-d0f2-4f20-a3b6-bf6ccd17e141","Type":"ContainerStarted","Data":"89412381f706b926defd3d3015f12fb9c93c8ae43a869d6b89b11b2a34615c3c"}
Mar 18 10:13:39.427001 master-0 kubenswrapper[30420]: I0318 10:13:39.426543 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-dqvx5"
Mar 18 10:13:39.441933 master-0 kubenswrapper[30420]: I0318 10:13:39.441856 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-dqvx5" podStartSLOduration=217.441817579 podStartE2EDuration="3m37.441817579s" podCreationTimestamp="2026-03-18 10:10:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:13:39.441802399 +0000 UTC m=+183.494548338" watchObservedRunningTime="2026-03-18 10:13:39.441817579 +0000 UTC m=+183.494563518"
Mar 18 10:13:39.457501 master-0 kubenswrapper[30420]: I0318 10:13:39.457463 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-dqvx5"
Mar 18 10:13:39.527889 master-0 kubenswrapper[30420]: I0318 10:13:39.527774 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 18 10:13:39.725974 master-0 kubenswrapper[30420]: I0318 10:13:39.725928 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38"
Mar 18 10:13:39.995791 master-0 kubenswrapper[30420]: I0318 10:13:39.995644 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 18 10:13:40.417603 master-0 kubenswrapper[30420]: I0318 10:13:40.417551 30420 reflector.go:368] Caches populated for *v1.Secret from
object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 18 10:13:41.444642 master-0 kubenswrapper[30420]: I0318 10:13:41.444544 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" event={"ID":"edc60dd5-333f-44bc-bb10-f10673c59074","Type":"ContainerStarted","Data":"3c865a915fa70c9713900bc74ae8ce02817ffa929bdcfc9a047b9dd914cf416e"}
Mar 18 10:13:41.501151 master-0 kubenswrapper[30420]: I0318 10:13:41.500998 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" podStartSLOduration=134.780903276 podStartE2EDuration="2m16.50097287s" podCreationTimestamp="2026-03-18 10:11:25 +0000 UTC" firstStartedPulling="2026-03-18 10:13:38.827074007 +0000 UTC m=+182.879819936" lastFinishedPulling="2026-03-18 10:13:40.547143611 +0000 UTC m=+184.599889530" observedRunningTime="2026-03-18 10:13:41.497517163 +0000 UTC m=+185.550263132" watchObservedRunningTime="2026-03-18 10:13:41.50097287 +0000 UTC m=+185.553718829"
Mar 18 10:13:42.453561 master-0 kubenswrapper[30420]: I0318 10:13:42.453502 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm"
Mar 18 10:13:42.462809 master-0 kubenswrapper[30420]: I0318 10:13:42.462756 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm"
Mar 18 10:13:46.074262 master-0 kubenswrapper[30420]: I0318 10:13:46.074178 30420 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 18 10:13:46.075025 master-0 kubenswrapper[30420]: I0318 10:13:46.074568 30420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" containerName="startup-monitor" containerID="cri-o://5a61a2c573738e71fed75545f604a081729d6be677df07f48a9700a49bbc8e27" gracePeriod=5
Mar 18 10:13:51.531763 master-0 kubenswrapper[30420]: I0318 10:13:51.531568 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_ebbfbf2b56df0323ba118d68bfdad8b9/startup-monitor/0.log"
Mar 18 10:13:51.531763 master-0 kubenswrapper[30420]: I0318 10:13:51.531643 30420 generic.go:334] "Generic (PLEG): container finished" podID="ebbfbf2b56df0323ba118d68bfdad8b9" containerID="5a61a2c573738e71fed75545f604a081729d6be677df07f48a9700a49bbc8e27" exitCode=137
Mar 18 10:13:51.659238 master-0 kubenswrapper[30420]: I0318 10:13:51.659164 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_ebbfbf2b56df0323ba118d68bfdad8b9/startup-monitor/0.log"
Mar 18 10:13:51.659238 master-0 kubenswrapper[30420]: I0318 10:13:51.659237 30420 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 10:13:51.743380 master-0 kubenswrapper[30420]: I0318 10:13:51.743305 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log\") pod \"ebbfbf2b56df0323ba118d68bfdad8b9\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") "
Mar 18 10:13:51.743605 master-0 kubenswrapper[30420]: I0318 10:13:51.743396 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock\") pod \"ebbfbf2b56df0323ba118d68bfdad8b9\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") "
Mar 18 10:13:51.743605 master-0 kubenswrapper[30420]: I0318 10:13:51.743487 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir\") pod \"ebbfbf2b56df0323ba118d68bfdad8b9\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") "
Mar 18 10:13:51.743605 master-0 kubenswrapper[30420]: I0318 10:13:51.743470 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log" (OuterVolumeSpecName: "var-log") pod "ebbfbf2b56df0323ba118d68bfdad8b9" (UID: "ebbfbf2b56df0323ba118d68bfdad8b9"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 10:13:51.743713 master-0 kubenswrapper[30420]: I0318 10:13:51.743528 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "ebbfbf2b56df0323ba118d68bfdad8b9" (UID: "ebbfbf2b56df0323ba118d68bfdad8b9"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 10:13:51.743713 master-0 kubenswrapper[30420]: I0318 10:13:51.743556 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock" (OuterVolumeSpecName: "var-lock") pod "ebbfbf2b56df0323ba118d68bfdad8b9" (UID: "ebbfbf2b56df0323ba118d68bfdad8b9"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 10:13:51.743713 master-0 kubenswrapper[30420]: I0318 10:13:51.743581 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir\") pod \"ebbfbf2b56df0323ba118d68bfdad8b9\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") "
Mar 18 10:13:51.743804 master-0 kubenswrapper[30420]: I0318 10:13:51.743726 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests\") pod \"ebbfbf2b56df0323ba118d68bfdad8b9\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") "
Mar 18 10:13:51.743889 master-0 kubenswrapper[30420]: I0318 10:13:51.743803 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests" (OuterVolumeSpecName: "manifests") pod "ebbfbf2b56df0323ba118d68bfdad8b9" (UID: "ebbfbf2b56df0323ba118d68bfdad8b9"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 10:13:51.744216 master-0 kubenswrapper[30420]: I0318 10:13:51.744168 30420 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log\") on node \"master-0\" DevicePath \"\""
Mar 18 10:13:51.744265 master-0 kubenswrapper[30420]: I0318 10:13:51.744222 30420 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 10:13:51.744265 master-0 kubenswrapper[30420]: I0318 10:13:51.744247 30420 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 10:13:51.744329 master-0 kubenswrapper[30420]: I0318 10:13:51.744267 30420 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests\") on node \"master-0\" DevicePath \"\""
Mar 18 10:13:51.749173 master-0 kubenswrapper[30420]: I0318 10:13:51.749109 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "ebbfbf2b56df0323ba118d68bfdad8b9" (UID: "ebbfbf2b56df0323ba118d68bfdad8b9"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 10:13:51.845948 master-0 kubenswrapper[30420]: I0318 10:13:51.845860 30420 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 10:13:52.193054 master-0 kubenswrapper[30420]: I0318 10:13:52.192960 30420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" path="/var/lib/kubelet/pods/ebbfbf2b56df0323ba118d68bfdad8b9/volumes"
Mar 18 10:13:52.542462 master-0 kubenswrapper[30420]: I0318 10:13:52.542401 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_ebbfbf2b56df0323ba118d68bfdad8b9/startup-monitor/0.log"
Mar 18 10:13:52.542998 master-0 kubenswrapper[30420]: I0318 10:13:52.542511 30420 scope.go:117] "RemoveContainer" containerID="5a61a2c573738e71fed75545f604a081729d6be677df07f48a9700a49bbc8e27"
Mar 18 10:13:52.542998 master-0 kubenswrapper[30420]: I0318 10:13:52.542594 30420 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 10:14:03.645501 master-0 kubenswrapper[30420]: I0318 10:14:03.645396 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3ddfa5bb627414042dcc2d2204092c5a/kube-controller-manager/1.log"
Mar 18 10:14:03.649320 master-0 kubenswrapper[30420]: I0318 10:14:03.649285 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3ddfa5bb627414042dcc2d2204092c5a/kube-controller-manager/0.log"
Mar 18 10:14:03.649467 master-0 kubenswrapper[30420]: I0318 10:14:03.649444 30420 generic.go:334] "Generic (PLEG): container finished" podID="3ddfa5bb627414042dcc2d2204092c5a" containerID="6262b65beb6f2b9683bc9235394d7e13025f8660d9fd4a525d7b5aaf9d248d9f" exitCode=137
Mar 18 10:14:03.649613 master-0 kubenswrapper[30420]: I0318 10:14:03.649533 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3ddfa5bb627414042dcc2d2204092c5a","Type":"ContainerDied","Data":"6262b65beb6f2b9683bc9235394d7e13025f8660d9fd4a525d7b5aaf9d248d9f"}
Mar 18 10:14:03.649703 master-0 kubenswrapper[30420]: I0318 10:14:03.649665 30420 scope.go:117] "RemoveContainer" containerID="fdfbe791c7dc81669c0055767b2119c9a2cf184b178248ae50fb983ef7ccd9a8"
Mar 18 10:14:04.663992 master-0 kubenswrapper[30420]: I0318 10:14:04.663897 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3ddfa5bb627414042dcc2d2204092c5a/kube-controller-manager/1.log"
Mar 18 10:14:06.687171 master-0 kubenswrapper[30420]: I0318 10:14:06.687086 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3ddfa5bb627414042dcc2d2204092c5a/kube-controller-manager/1.log"
Mar 18 10:14:06.688495 master-0 kubenswrapper[30420]: I0318 10:14:06.688432 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3ddfa5bb627414042dcc2d2204092c5a","Type":"ContainerStarted","Data":"ccd6cbf6d4ce935b58d5a48e8e114cb176bae107aae7a2468a9c3a9b21d51cde"}
Mar 18 10:14:12.583169 master-0 kubenswrapper[30420]: I0318 10:14:12.583076 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 10:14:12.590173 master-0 kubenswrapper[30420]: I0318 10:14:12.590114 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 10:14:12.743634 master-0 kubenswrapper[30420]: I0318 10:14:12.743555 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 10:14:25.212056 master-0 kubenswrapper[30420]: I0318 10:14:25.211988 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-mh2fn"]
Mar 18 10:14:25.212684 master-0 kubenswrapper[30420]: E0318 10:14:25.212295 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" containerName="startup-monitor"
Mar 18 10:14:25.212684 master-0 kubenswrapper[30420]: I0318 10:14:25.212312 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" containerName="startup-monitor"
Mar 18 10:14:25.212684 master-0 kubenswrapper[30420]: I0318 10:14:25.212421 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" containerName="startup-monitor"
Mar 18 10:14:25.212949 master-0 kubenswrapper[30420]: I0318 10:14:25.212920 30420 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-image-registry/node-ca-mh2fn"
Mar 18 10:14:25.218143 master-0 kubenswrapper[30420]: I0318 10:14:25.218078 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4xtjm"
Mar 18 10:14:25.218316 master-0 kubenswrapper[30420]: I0318 10:14:25.218103 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Mar 18 10:14:25.386913 master-0 kubenswrapper[30420]: I0318 10:14:25.386795 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9fba458a-8c86-4d0a-8efb-266a84f62a9a-serviceca\") pod \"node-ca-mh2fn\" (UID: \"9fba458a-8c86-4d0a-8efb-266a84f62a9a\") " pod="openshift-image-registry/node-ca-mh2fn"
Mar 18 10:14:25.387129 master-0 kubenswrapper[30420]: I0318 10:14:25.386967 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8lbj\" (UniqueName: \"kubernetes.io/projected/9fba458a-8c86-4d0a-8efb-266a84f62a9a-kube-api-access-j8lbj\") pod \"node-ca-mh2fn\" (UID: \"9fba458a-8c86-4d0a-8efb-266a84f62a9a\") " pod="openshift-image-registry/node-ca-mh2fn"
Mar 18 10:14:25.387129 master-0 kubenswrapper[30420]: I0318 10:14:25.387004 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9fba458a-8c86-4d0a-8efb-266a84f62a9a-host\") pod \"node-ca-mh2fn\" (UID: \"9fba458a-8c86-4d0a-8efb-266a84f62a9a\") " pod="openshift-image-registry/node-ca-mh2fn"
Mar 18 10:14:25.488979 master-0 kubenswrapper[30420]: I0318 10:14:25.488855 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9fba458a-8c86-4d0a-8efb-266a84f62a9a-serviceca\") pod \"node-ca-mh2fn\" (UID: \"9fba458a-8c86-4d0a-8efb-266a84f62a9a\") " pod="openshift-image-registry/node-ca-mh2fn"
Mar 18 10:14:25.488979 master-0 kubenswrapper[30420]: I0318 10:14:25.488957 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8lbj\" (UniqueName: \"kubernetes.io/projected/9fba458a-8c86-4d0a-8efb-266a84f62a9a-kube-api-access-j8lbj\") pod \"node-ca-mh2fn\" (UID: \"9fba458a-8c86-4d0a-8efb-266a84f62a9a\") " pod="openshift-image-registry/node-ca-mh2fn"
Mar 18 10:14:25.489238 master-0 kubenswrapper[30420]: I0318 10:14:25.489006 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9fba458a-8c86-4d0a-8efb-266a84f62a9a-host\") pod \"node-ca-mh2fn\" (UID: \"9fba458a-8c86-4d0a-8efb-266a84f62a9a\") " pod="openshift-image-registry/node-ca-mh2fn"
Mar 18 10:14:25.489238 master-0 kubenswrapper[30420]: I0318 10:14:25.489122 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9fba458a-8c86-4d0a-8efb-266a84f62a9a-host\") pod \"node-ca-mh2fn\" (UID: \"9fba458a-8c86-4d0a-8efb-266a84f62a9a\") " pod="openshift-image-registry/node-ca-mh2fn"
Mar 18 10:14:25.489840 master-0 kubenswrapper[30420]: I0318 10:14:25.489775 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9fba458a-8c86-4d0a-8efb-266a84f62a9a-serviceca\") pod \"node-ca-mh2fn\" (UID: \"9fba458a-8c86-4d0a-8efb-266a84f62a9a\") " pod="openshift-image-registry/node-ca-mh2fn"
Mar 18 10:14:25.510508 master-0 kubenswrapper[30420]: I0318 10:14:25.510420 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8lbj\" (UniqueName: \"kubernetes.io/projected/9fba458a-8c86-4d0a-8efb-266a84f62a9a-kube-api-access-j8lbj\") pod \"node-ca-mh2fn\" (UID: \"9fba458a-8c86-4d0a-8efb-266a84f62a9a\") " pod="openshift-image-registry/node-ca-mh2fn"
Mar 18 10:14:25.535452 master-0 kubenswrapper[30420]: I0318 10:14:25.535389 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-mh2fn"
Mar 18 10:14:25.557812 master-0 kubenswrapper[30420]: W0318 10:14:25.557750 30420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fba458a_8c86_4d0a_8efb_266a84f62a9a.slice/crio-3d661eba642c74d110d94e742b7cff845513f5a586de01fbdadd0c341046ccc0 WatchSource:0}: Error finding container 3d661eba642c74d110d94e742b7cff845513f5a586de01fbdadd0c341046ccc0: Status 404 returned error can't find the container with id 3d661eba642c74d110d94e742b7cff845513f5a586de01fbdadd0c341046ccc0
Mar 18 10:14:25.607470 master-0 kubenswrapper[30420]: I0318 10:14:25.607424 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 10:14:25.849404 master-0 kubenswrapper[30420]: I0318 10:14:25.849346 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-mh2fn" event={"ID":"9fba458a-8c86-4d0a-8efb-266a84f62a9a","Type":"ContainerStarted","Data":"3d661eba642c74d110d94e742b7cff845513f5a586de01fbdadd0c341046ccc0"}
Mar 18 10:14:27.863652 master-0 kubenswrapper[30420]: I0318 10:14:27.863571 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-mh2fn" event={"ID":"9fba458a-8c86-4d0a-8efb-266a84f62a9a","Type":"ContainerStarted","Data":"3067f1a0da634acc188bb7da045068f18a27f3f811683aabe8a01e1e957dd638"}
Mar 18 10:14:28.890550 master-0 kubenswrapper[30420]: I0318 10:14:28.890446 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-mh2fn" podStartSLOduration=1.88879625 podStartE2EDuration="3.890427694s" podCreationTimestamp="2026-03-18 10:14:25 +0000 UTC" firstStartedPulling="2026-03-18 10:14:25.559912196 +0000
UTC m=+229.612658125" lastFinishedPulling="2026-03-18 10:14:27.56154364 +0000 UTC m=+231.614289569" observedRunningTime="2026-03-18 10:14:28.887664995 +0000 UTC m=+232.940410934" watchObservedRunningTime="2026-03-18 10:14:28.890427694 +0000 UTC m=+232.943173623"
Mar 18 10:14:58.826581 master-0 kubenswrapper[30420]: I0318 10:14:58.826509 30420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9"]
Mar 18 10:14:58.827331 master-0 kubenswrapper[30420]: I0318 10:14:58.826773 30420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" podUID="9fc664ff-2e8f-441d-82dc-8f21c1d362d7" containerName="controller-manager" containerID="cri-o://1c0004ed0ea941f68b537e3f18e4eff3370d5b413fdcbd5d92b3955c2e83f6ad" gracePeriod=30
Mar 18 10:14:58.942945 master-0 kubenswrapper[30420]: I0318 10:14:58.942850 30420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68"]
Mar 18 10:14:58.943287 master-0 kubenswrapper[30420]: I0318 10:14:58.943135 30420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" podUID="8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d" containerName="route-controller-manager" containerID="cri-o://890a35c9e75544981cbb11efe21b82c439f21326c21abe7bb6e440e5194299e3" gracePeriod=30
Mar 18 10:14:59.113910 master-0 kubenswrapper[30420]: I0318 10:14:59.111666 30420 generic.go:334] "Generic (PLEG): container finished" podID="8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d" containerID="890a35c9e75544981cbb11efe21b82c439f21326c21abe7bb6e440e5194299e3" exitCode=0
Mar 18 10:14:59.113910 master-0 kubenswrapper[30420]: I0318 10:14:59.111724 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" event={"ID":"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d","Type":"ContainerDied","Data":"890a35c9e75544981cbb11efe21b82c439f21326c21abe7bb6e440e5194299e3"}
Mar 18 10:14:59.113910 master-0 kubenswrapper[30420]: I0318 10:14:59.111755 30420 scope.go:117] "RemoveContainer" containerID="ef56f38c2bc505e5fbc078e115510767e1b06d3c1193709a420591be902fdca8"
Mar 18 10:14:59.115087 master-0 kubenswrapper[30420]: I0318 10:14:59.115067 30420 generic.go:334] "Generic (PLEG): container finished" podID="9fc664ff-2e8f-441d-82dc-8f21c1d362d7" containerID="1c0004ed0ea941f68b537e3f18e4eff3370d5b413fdcbd5d92b3955c2e83f6ad" exitCode=0
Mar 18 10:14:59.115153 master-0 kubenswrapper[30420]: I0318 10:14:59.115092 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" event={"ID":"9fc664ff-2e8f-441d-82dc-8f21c1d362d7","Type":"ContainerDied","Data":"1c0004ed0ea941f68b537e3f18e4eff3370d5b413fdcbd5d92b3955c2e83f6ad"}
Mar 18 10:14:59.139160 master-0 kubenswrapper[30420]: I0318 10:14:59.139091 30420 scope.go:117] "RemoveContainer" containerID="6959115a6f11e9fd2881ca4214b94da71213aad3f3ef00ebec36ed62d0816399"
Mar 18 10:14:59.237787 master-0 kubenswrapper[30420]: I0318 10:14:59.237687 30420 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9"
Mar 18 10:14:59.298656 master-0 kubenswrapper[30420]: I0318 10:14:59.298590 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-config\") pod \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") "
Mar 18 10:14:59.298931 master-0 kubenswrapper[30420]: I0318 10:14:59.298731 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-client-ca\") pod \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") "
Mar 18 10:14:59.298993 master-0 kubenswrapper[30420]: I0318 10:14:59.298936 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-proxy-ca-bundles\") pod \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") "
Mar 18 10:14:59.299043 master-0 kubenswrapper[30420]: I0318 10:14:59.299012 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-serving-cert\") pod \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") "
Mar 18 10:14:59.299798 master-0 kubenswrapper[30420]: I0318 10:14:59.299294 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x46bf\" (UniqueName: \"kubernetes.io/projected/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-kube-api-access-x46bf\") pod \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\" (UID: \"9fc664ff-2e8f-441d-82dc-8f21c1d362d7\") "
Mar 18 10:14:59.299798 master-0 kubenswrapper[30420]: I0318 10:14:59.299371 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-client-ca" (OuterVolumeSpecName: "client-ca") pod "9fc664ff-2e8f-441d-82dc-8f21c1d362d7" (UID: "9fc664ff-2e8f-441d-82dc-8f21c1d362d7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 10:14:59.299798 master-0 kubenswrapper[30420]: I0318 10:14:59.299778 30420 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 10:14:59.300032 master-0 kubenswrapper[30420]: I0318 10:14:59.299801 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9fc664ff-2e8f-441d-82dc-8f21c1d362d7" (UID: "9fc664ff-2e8f-441d-82dc-8f21c1d362d7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 10:14:59.300706 master-0 kubenswrapper[30420]: I0318 10:14:59.300669 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-config" (OuterVolumeSpecName: "config") pod "9fc664ff-2e8f-441d-82dc-8f21c1d362d7" (UID: "9fc664ff-2e8f-441d-82dc-8f21c1d362d7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 10:14:59.302716 master-0 kubenswrapper[30420]: I0318 10:14:59.302662 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9fc664ff-2e8f-441d-82dc-8f21c1d362d7" (UID: "9fc664ff-2e8f-441d-82dc-8f21c1d362d7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 10:14:59.304028 master-0 kubenswrapper[30420]: I0318 10:14:59.303952 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-kube-api-access-x46bf" (OuterVolumeSpecName: "kube-api-access-x46bf") pod "9fc664ff-2e8f-441d-82dc-8f21c1d362d7" (UID: "9fc664ff-2e8f-441d-82dc-8f21c1d362d7"). InnerVolumeSpecName "kube-api-access-x46bf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 10:14:59.338740 master-0 kubenswrapper[30420]: I0318 10:14:59.338652 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68"
Mar 18 10:14:59.401431 master-0 kubenswrapper[30420]: I0318 10:14:59.401317 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-config\") pod \"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d\" (UID: \"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d\") "
Mar 18 10:14:59.401681 master-0 kubenswrapper[30420]: I0318 10:14:59.401501 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpk5h\" (UniqueName: \"kubernetes.io/projected/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-kube-api-access-gpk5h\") pod \"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d\" (UID: \"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d\") "
Mar 18 10:14:59.401681 master-0 kubenswrapper[30420]: I0318 10:14:59.401539 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-client-ca\") pod \"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d\" (UID: \"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d\") "
Mar 18 10:14:59.401681 master-0 kubenswrapper[30420]: I0318 10:14:59.401570 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-serving-cert\") pod \"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d\" (UID: \"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d\") "
Mar 18 10:14:59.402091 master-0 kubenswrapper[30420]: I0318 10:14:59.401911 30420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x46bf\" (UniqueName: \"kubernetes.io/projected/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-kube-api-access-x46bf\") on node \"master-0\" DevicePath \"\""
Mar 18 10:14:59.402091 master-0 kubenswrapper[30420]: I0318 10:14:59.401934 30420 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-config\") on node \"master-0\" DevicePath \"\""
Mar 18 10:14:59.402091 master-0 kubenswrapper[30420]: I0318 10:14:59.401948 30420 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Mar 18 10:14:59.402091 master-0 kubenswrapper[30420]: I0318 10:14:59.401961 30420 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fc664ff-2e8f-441d-82dc-8f21c1d362d7-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 10:14:59.402091 master-0 kubenswrapper[30420]: I0318 10:14:59.401971 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-config" (OuterVolumeSpecName: "config") pod "8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d" (UID: "8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d"). InnerVolumeSpecName "config".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:14:59.402454 master-0 kubenswrapper[30420]: I0318 10:14:59.402401 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-client-ca" (OuterVolumeSpecName: "client-ca") pod "8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d" (UID: "8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:14:59.404442 master-0 kubenswrapper[30420]: I0318 10:14:59.404375 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-kube-api-access-gpk5h" (OuterVolumeSpecName: "kube-api-access-gpk5h") pod "8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d" (UID: "8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d"). InnerVolumeSpecName "kube-api-access-gpk5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 10:14:59.405306 master-0 kubenswrapper[30420]: I0318 10:14:59.405276 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d" (UID: "8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 10:14:59.502958 master-0 kubenswrapper[30420]: I0318 10:14:59.502883 30420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpk5h\" (UniqueName: \"kubernetes.io/projected/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-kube-api-access-gpk5h\") on node \"master-0\" DevicePath \"\"" Mar 18 10:14:59.502958 master-0 kubenswrapper[30420]: I0318 10:14:59.502927 30420 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 10:14:59.502958 master-0 kubenswrapper[30420]: I0318 10:14:59.502940 30420 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 10:14:59.502958 master-0 kubenswrapper[30420]: I0318 10:14:59.502950 30420 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d-config\") on node \"master-0\" DevicePath \"\"" Mar 18 10:15:00.127380 master-0 kubenswrapper[30420]: I0318 10:15:00.127210 30420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" Mar 18 10:15:00.127380 master-0 kubenswrapper[30420]: I0318 10:15:00.127255 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9" event={"ID":"9fc664ff-2e8f-441d-82dc-8f21c1d362d7","Type":"ContainerDied","Data":"2b738d6ab8a2079028f3f1e5804df92e50d8884090bb1653ec14e4d63a6afccd"} Mar 18 10:15:00.127380 master-0 kubenswrapper[30420]: I0318 10:15:00.127374 30420 scope.go:117] "RemoveContainer" containerID="1c0004ed0ea941f68b537e3f18e4eff3370d5b413fdcbd5d92b3955c2e83f6ad" Mar 18 10:15:00.131221 master-0 kubenswrapper[30420]: I0318 10:15:00.130570 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" event={"ID":"8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d","Type":"ContainerDied","Data":"b588169f9714563a6db5379251857ae747425b95554009dbd48c296b2e82b297"} Mar 18 10:15:00.131221 master-0 kubenswrapper[30420]: I0318 10:15:00.130675 30420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68" Mar 18 10:15:00.157291 master-0 kubenswrapper[30420]: I0318 10:15:00.156139 30420 scope.go:117] "RemoveContainer" containerID="890a35c9e75544981cbb11efe21b82c439f21326c21abe7bb6e440e5194299e3" Mar 18 10:15:00.244301 master-0 kubenswrapper[30420]: I0318 10:15:00.244129 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6c5f9d9f94-gmwr6"] Mar 18 10:15:00.244687 master-0 kubenswrapper[30420]: E0318 10:15:00.244653 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fc664ff-2e8f-441d-82dc-8f21c1d362d7" containerName="controller-manager" Mar 18 10:15:00.244687 master-0 kubenswrapper[30420]: I0318 10:15:00.244673 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fc664ff-2e8f-441d-82dc-8f21c1d362d7" containerName="controller-manager" Mar 18 10:15:00.244802 master-0 kubenswrapper[30420]: E0318 10:15:00.244697 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d" containerName="route-controller-manager" Mar 18 10:15:00.244802 master-0 kubenswrapper[30420]: I0318 10:15:00.244704 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d" containerName="route-controller-manager" Mar 18 10:15:00.244802 master-0 kubenswrapper[30420]: E0318 10:15:00.244721 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d" containerName="route-controller-manager" Mar 18 10:15:00.246650 master-0 kubenswrapper[30420]: I0318 10:15:00.246603 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d" containerName="route-controller-manager" Mar 18 10:15:00.246789 master-0 kubenswrapper[30420]: I0318 10:15:00.246749 30420 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d" containerName="route-controller-manager" Mar 18 10:15:00.246899 master-0 kubenswrapper[30420]: I0318 10:15:00.246837 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fc664ff-2e8f-441d-82dc-8f21c1d362d7" containerName="controller-manager" Mar 18 10:15:00.246899 master-0 kubenswrapper[30420]: I0318 10:15:00.246856 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d" containerName="route-controller-manager" Mar 18 10:15:00.246899 master-0 kubenswrapper[30420]: I0318 10:15:00.246867 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fc664ff-2e8f-441d-82dc-8f21c1d362d7" containerName="controller-manager" Mar 18 10:15:00.247307 master-0 kubenswrapper[30420]: I0318 10:15:00.247267 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c5f9d9f94-gmwr6" Mar 18 10:15:00.314057 master-0 kubenswrapper[30420]: I0318 10:15:00.313728 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-czmbt" Mar 18 10:15:00.314057 master-0 kubenswrapper[30420]: I0318 10:15:00.314017 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 10:15:00.314296 master-0 kubenswrapper[30420]: I0318 10:15:00.314168 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 10:15:00.326526 master-0 kubenswrapper[30420]: I0318 10:15:00.326473 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b698c899f-l7lq9"] Mar 18 10:15:00.326916 master-0 kubenswrapper[30420]: E0318 10:15:00.326895 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fc664ff-2e8f-441d-82dc-8f21c1d362d7" 
containerName="controller-manager" Mar 18 10:15:00.326916 master-0 kubenswrapper[30420]: I0318 10:15:00.326913 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fc664ff-2e8f-441d-82dc-8f21c1d362d7" containerName="controller-manager" Mar 18 10:15:00.327905 master-0 kubenswrapper[30420]: I0318 10:15:00.327845 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 10:15:00.327905 master-0 kubenswrapper[30420]: I0318 10:15:00.327885 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b698c899f-l7lq9" Mar 18 10:15:00.328793 master-0 kubenswrapper[30420]: I0318 10:15:00.328737 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 18 10:15:00.329155 master-0 kubenswrapper[30420]: I0318 10:15:00.329113 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-c5mc5" Mar 18 10:15:00.329487 master-0 kubenswrapper[30420]: I0318 10:15:00.329444 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 10:15:00.329672 master-0 kubenswrapper[30420]: I0318 10:15:00.329635 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 18 10:15:00.330897 master-0 kubenswrapper[30420]: I0318 10:15:00.330748 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 10:15:00.331682 master-0 kubenswrapper[30420]: I0318 10:15:00.331353 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 18 10:15:00.331682 master-0 kubenswrapper[30420]: I0318 10:15:00.331618 30420 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 18 10:15:00.331887 master-0 kubenswrapper[30420]: I0318 10:15:00.331644 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 18 10:15:00.334319 master-0 kubenswrapper[30420]: I0318 10:15:00.334151 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6c5f9d9f94-gmwr6"] Mar 18 10:15:00.341883 master-0 kubenswrapper[30420]: I0318 10:15:00.339087 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 18 10:15:00.346859 master-0 kubenswrapper[30420]: I0318 10:15:00.346797 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b698c899f-l7lq9"] Mar 18 10:15:00.359135 master-0 kubenswrapper[30420]: I0318 10:15:00.358332 30420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9"] Mar 18 10:15:00.367436 master-0 kubenswrapper[30420]: I0318 10:15:00.367100 30420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6c87d45bb4-vxcx9"] Mar 18 10:15:00.380340 master-0 kubenswrapper[30420]: I0318 10:15:00.380273 30420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68"] Mar 18 10:15:00.384139 master-0 kubenswrapper[30420]: I0318 10:15:00.384090 30420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5657df7dd8-4pp68"] Mar 18 10:15:00.418034 master-0 kubenswrapper[30420]: I0318 10:15:00.417973 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d526m\" (UniqueName: 
\"kubernetes.io/projected/803d47cf-f96a-4ea0-8a82-1624bbfd6b3a-kube-api-access-d526m\") pod \"controller-manager-6c5f9d9f94-gmwr6\" (UID: \"803d47cf-f96a-4ea0-8a82-1624bbfd6b3a\") " pod="openshift-controller-manager/controller-manager-6c5f9d9f94-gmwr6" Mar 18 10:15:00.418236 master-0 kubenswrapper[30420]: I0318 10:15:00.418047 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/803d47cf-f96a-4ea0-8a82-1624bbfd6b3a-proxy-ca-bundles\") pod \"controller-manager-6c5f9d9f94-gmwr6\" (UID: \"803d47cf-f96a-4ea0-8a82-1624bbfd6b3a\") " pod="openshift-controller-manager/controller-manager-6c5f9d9f94-gmwr6" Mar 18 10:15:00.418236 master-0 kubenswrapper[30420]: I0318 10:15:00.418185 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/803d47cf-f96a-4ea0-8a82-1624bbfd6b3a-serving-cert\") pod \"controller-manager-6c5f9d9f94-gmwr6\" (UID: \"803d47cf-f96a-4ea0-8a82-1624bbfd6b3a\") " pod="openshift-controller-manager/controller-manager-6c5f9d9f94-gmwr6" Mar 18 10:15:00.418302 master-0 kubenswrapper[30420]: I0318 10:15:00.418278 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/803d47cf-f96a-4ea0-8a82-1624bbfd6b3a-config\") pod \"controller-manager-6c5f9d9f94-gmwr6\" (UID: \"803d47cf-f96a-4ea0-8a82-1624bbfd6b3a\") " pod="openshift-controller-manager/controller-manager-6c5f9d9f94-gmwr6" Mar 18 10:15:00.418693 master-0 kubenswrapper[30420]: I0318 10:15:00.418656 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/803d47cf-f96a-4ea0-8a82-1624bbfd6b3a-client-ca\") pod \"controller-manager-6c5f9d9f94-gmwr6\" (UID: \"803d47cf-f96a-4ea0-8a82-1624bbfd6b3a\") " 
pod="openshift-controller-manager/controller-manager-6c5f9d9f94-gmwr6" Mar 18 10:15:00.520446 master-0 kubenswrapper[30420]: I0318 10:15:00.520287 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52bfd01e-9bd9-47ce-944a-6d8fb76108e6-serving-cert\") pod \"route-controller-manager-6b698c899f-l7lq9\" (UID: \"52bfd01e-9bd9-47ce-944a-6d8fb76108e6\") " pod="openshift-route-controller-manager/route-controller-manager-6b698c899f-l7lq9" Mar 18 10:15:00.520446 master-0 kubenswrapper[30420]: I0318 10:15:00.520342 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52bfd01e-9bd9-47ce-944a-6d8fb76108e6-config\") pod \"route-controller-manager-6b698c899f-l7lq9\" (UID: \"52bfd01e-9bd9-47ce-944a-6d8fb76108e6\") " pod="openshift-route-controller-manager/route-controller-manager-6b698c899f-l7lq9" Mar 18 10:15:00.520446 master-0 kubenswrapper[30420]: I0318 10:15:00.520402 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/803d47cf-f96a-4ea0-8a82-1624bbfd6b3a-config\") pod \"controller-manager-6c5f9d9f94-gmwr6\" (UID: \"803d47cf-f96a-4ea0-8a82-1624bbfd6b3a\") " pod="openshift-controller-manager/controller-manager-6c5f9d9f94-gmwr6" Mar 18 10:15:00.520446 master-0 kubenswrapper[30420]: I0318 10:15:00.520430 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/803d47cf-f96a-4ea0-8a82-1624bbfd6b3a-client-ca\") pod \"controller-manager-6c5f9d9f94-gmwr6\" (UID: \"803d47cf-f96a-4ea0-8a82-1624bbfd6b3a\") " pod="openshift-controller-manager/controller-manager-6c5f9d9f94-gmwr6" Mar 18 10:15:00.520743 master-0 kubenswrapper[30420]: I0318 10:15:00.520469 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-d526m\" (UniqueName: \"kubernetes.io/projected/803d47cf-f96a-4ea0-8a82-1624bbfd6b3a-kube-api-access-d526m\") pod \"controller-manager-6c5f9d9f94-gmwr6\" (UID: \"803d47cf-f96a-4ea0-8a82-1624bbfd6b3a\") " pod="openshift-controller-manager/controller-manager-6c5f9d9f94-gmwr6" Mar 18 10:15:00.520886 master-0 kubenswrapper[30420]: I0318 10:15:00.520858 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrwfj\" (UniqueName: \"kubernetes.io/projected/52bfd01e-9bd9-47ce-944a-6d8fb76108e6-kube-api-access-mrwfj\") pod \"route-controller-manager-6b698c899f-l7lq9\" (UID: \"52bfd01e-9bd9-47ce-944a-6d8fb76108e6\") " pod="openshift-route-controller-manager/route-controller-manager-6b698c899f-l7lq9" Mar 18 10:15:00.520965 master-0 kubenswrapper[30420]: I0318 10:15:00.520899 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/52bfd01e-9bd9-47ce-944a-6d8fb76108e6-client-ca\") pod \"route-controller-manager-6b698c899f-l7lq9\" (UID: \"52bfd01e-9bd9-47ce-944a-6d8fb76108e6\") " pod="openshift-route-controller-manager/route-controller-manager-6b698c899f-l7lq9" Mar 18 10:15:00.520965 master-0 kubenswrapper[30420]: I0318 10:15:00.520930 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/803d47cf-f96a-4ea0-8a82-1624bbfd6b3a-proxy-ca-bundles\") pod \"controller-manager-6c5f9d9f94-gmwr6\" (UID: \"803d47cf-f96a-4ea0-8a82-1624bbfd6b3a\") " pod="openshift-controller-manager/controller-manager-6c5f9d9f94-gmwr6" Mar 18 10:15:00.521227 master-0 kubenswrapper[30420]: I0318 10:15:00.521188 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/803d47cf-f96a-4ea0-8a82-1624bbfd6b3a-serving-cert\") pod \"controller-manager-6c5f9d9f94-gmwr6\" (UID: 
\"803d47cf-f96a-4ea0-8a82-1624bbfd6b3a\") " pod="openshift-controller-manager/controller-manager-6c5f9d9f94-gmwr6" Mar 18 10:15:00.521741 master-0 kubenswrapper[30420]: I0318 10:15:00.521714 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/803d47cf-f96a-4ea0-8a82-1624bbfd6b3a-client-ca\") pod \"controller-manager-6c5f9d9f94-gmwr6\" (UID: \"803d47cf-f96a-4ea0-8a82-1624bbfd6b3a\") " pod="openshift-controller-manager/controller-manager-6c5f9d9f94-gmwr6" Mar 18 10:15:00.522315 master-0 kubenswrapper[30420]: I0318 10:15:00.521977 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/803d47cf-f96a-4ea0-8a82-1624bbfd6b3a-proxy-ca-bundles\") pod \"controller-manager-6c5f9d9f94-gmwr6\" (UID: \"803d47cf-f96a-4ea0-8a82-1624bbfd6b3a\") " pod="openshift-controller-manager/controller-manager-6c5f9d9f94-gmwr6" Mar 18 10:15:00.522315 master-0 kubenswrapper[30420]: I0318 10:15:00.522106 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/803d47cf-f96a-4ea0-8a82-1624bbfd6b3a-config\") pod \"controller-manager-6c5f9d9f94-gmwr6\" (UID: \"803d47cf-f96a-4ea0-8a82-1624bbfd6b3a\") " pod="openshift-controller-manager/controller-manager-6c5f9d9f94-gmwr6" Mar 18 10:15:00.527928 master-0 kubenswrapper[30420]: I0318 10:15:00.524233 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/803d47cf-f96a-4ea0-8a82-1624bbfd6b3a-serving-cert\") pod \"controller-manager-6c5f9d9f94-gmwr6\" (UID: \"803d47cf-f96a-4ea0-8a82-1624bbfd6b3a\") " pod="openshift-controller-manager/controller-manager-6c5f9d9f94-gmwr6" Mar 18 10:15:00.541654 master-0 kubenswrapper[30420]: I0318 10:15:00.541585 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d526m\" (UniqueName: 
\"kubernetes.io/projected/803d47cf-f96a-4ea0-8a82-1624bbfd6b3a-kube-api-access-d526m\") pod \"controller-manager-6c5f9d9f94-gmwr6\" (UID: \"803d47cf-f96a-4ea0-8a82-1624bbfd6b3a\") " pod="openshift-controller-manager/controller-manager-6c5f9d9f94-gmwr6" Mar 18 10:15:00.622483 master-0 kubenswrapper[30420]: I0318 10:15:00.622427 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrwfj\" (UniqueName: \"kubernetes.io/projected/52bfd01e-9bd9-47ce-944a-6d8fb76108e6-kube-api-access-mrwfj\") pod \"route-controller-manager-6b698c899f-l7lq9\" (UID: \"52bfd01e-9bd9-47ce-944a-6d8fb76108e6\") " pod="openshift-route-controller-manager/route-controller-manager-6b698c899f-l7lq9" Mar 18 10:15:00.622483 master-0 kubenswrapper[30420]: I0318 10:15:00.622489 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/52bfd01e-9bd9-47ce-944a-6d8fb76108e6-client-ca\") pod \"route-controller-manager-6b698c899f-l7lq9\" (UID: \"52bfd01e-9bd9-47ce-944a-6d8fb76108e6\") " pod="openshift-route-controller-manager/route-controller-manager-6b698c899f-l7lq9" Mar 18 10:15:00.622712 master-0 kubenswrapper[30420]: I0318 10:15:00.622653 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52bfd01e-9bd9-47ce-944a-6d8fb76108e6-serving-cert\") pod \"route-controller-manager-6b698c899f-l7lq9\" (UID: \"52bfd01e-9bd9-47ce-944a-6d8fb76108e6\") " pod="openshift-route-controller-manager/route-controller-manager-6b698c899f-l7lq9" Mar 18 10:15:00.622712 master-0 kubenswrapper[30420]: I0318 10:15:00.622690 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52bfd01e-9bd9-47ce-944a-6d8fb76108e6-config\") pod \"route-controller-manager-6b698c899f-l7lq9\" (UID: \"52bfd01e-9bd9-47ce-944a-6d8fb76108e6\") " 
pod="openshift-route-controller-manager/route-controller-manager-6b698c899f-l7lq9" Mar 18 10:15:00.623586 master-0 kubenswrapper[30420]: I0318 10:15:00.623557 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/52bfd01e-9bd9-47ce-944a-6d8fb76108e6-client-ca\") pod \"route-controller-manager-6b698c899f-l7lq9\" (UID: \"52bfd01e-9bd9-47ce-944a-6d8fb76108e6\") " pod="openshift-route-controller-manager/route-controller-manager-6b698c899f-l7lq9" Mar 18 10:15:00.623953 master-0 kubenswrapper[30420]: I0318 10:15:00.623926 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52bfd01e-9bd9-47ce-944a-6d8fb76108e6-config\") pod \"route-controller-manager-6b698c899f-l7lq9\" (UID: \"52bfd01e-9bd9-47ce-944a-6d8fb76108e6\") " pod="openshift-route-controller-manager/route-controller-manager-6b698c899f-l7lq9" Mar 18 10:15:00.626148 master-0 kubenswrapper[30420]: I0318 10:15:00.626118 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52bfd01e-9bd9-47ce-944a-6d8fb76108e6-serving-cert\") pod \"route-controller-manager-6b698c899f-l7lq9\" (UID: \"52bfd01e-9bd9-47ce-944a-6d8fb76108e6\") " pod="openshift-route-controller-manager/route-controller-manager-6b698c899f-l7lq9" Mar 18 10:15:00.639246 master-0 kubenswrapper[30420]: I0318 10:15:00.639193 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrwfj\" (UniqueName: \"kubernetes.io/projected/52bfd01e-9bd9-47ce-944a-6d8fb76108e6-kube-api-access-mrwfj\") pod \"route-controller-manager-6b698c899f-l7lq9\" (UID: \"52bfd01e-9bd9-47ce-944a-6d8fb76108e6\") " pod="openshift-route-controller-manager/route-controller-manager-6b698c899f-l7lq9" Mar 18 10:15:00.655133 master-0 kubenswrapper[30420]: I0318 10:15:00.655068 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6c5f9d9f94-gmwr6" Mar 18 10:15:00.671377 master-0 kubenswrapper[30420]: I0318 10:15:00.671324 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b698c899f-l7lq9" Mar 18 10:15:01.085347 master-0 kubenswrapper[30420]: I0318 10:15:01.084626 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b698c899f-l7lq9"] Mar 18 10:15:01.128969 master-0 kubenswrapper[30420]: I0318 10:15:01.128921 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6c5f9d9f94-gmwr6"] Mar 18 10:15:01.150615 master-0 kubenswrapper[30420]: I0318 10:15:01.150536 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c5f9d9f94-gmwr6" event={"ID":"803d47cf-f96a-4ea0-8a82-1624bbfd6b3a","Type":"ContainerStarted","Data":"c1305d4395b801b0780a855dd5062bf8a2b03bd00826ba5b199c1d8a2a07b842"} Mar 18 10:15:01.174981 master-0 kubenswrapper[30420]: I0318 10:15:01.153036 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b698c899f-l7lq9" event={"ID":"52bfd01e-9bd9-47ce-944a-6d8fb76108e6","Type":"ContainerStarted","Data":"40d2c2374de78fe3fb82f77547620be5b99cfbd46d7673fd258dec248b2e5748"} Mar 18 10:15:02.162606 master-0 kubenswrapper[30420]: I0318 10:15:02.162521 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b698c899f-l7lq9" event={"ID":"52bfd01e-9bd9-47ce-944a-6d8fb76108e6","Type":"ContainerStarted","Data":"4bea1ffa168a4f6b57bf330cbe7b47659490e7bb6613afb8d251f7165a2ed1f9"} Mar 18 10:15:02.164062 master-0 kubenswrapper[30420]: I0318 10:15:02.164000 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-6c5f9d9f94-gmwr6" event={"ID":"803d47cf-f96a-4ea0-8a82-1624bbfd6b3a","Type":"ContainerStarted","Data":"ab82a621c02a64b1021f6b5b76f95c16b29c2005c17585d3da6b9031ab18bb7c"} Mar 18 10:15:02.164278 master-0 kubenswrapper[30420]: I0318 10:15:02.164247 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6c5f9d9f94-gmwr6" Mar 18 10:15:02.178435 master-0 kubenswrapper[30420]: I0318 10:15:02.178369 30420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d" path="/var/lib/kubelet/pods/8ef5f9ee-b76a-4d53-9e3f-e25f4e11d33d/volumes" Mar 18 10:15:02.179185 master-0 kubenswrapper[30420]: I0318 10:15:02.179152 30420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fc664ff-2e8f-441d-82dc-8f21c1d362d7" path="/var/lib/kubelet/pods/9fc664ff-2e8f-441d-82dc-8f21c1d362d7/volumes" Mar 18 10:15:02.180922 master-0 kubenswrapper[30420]: I0318 10:15:02.180901 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6c5f9d9f94-gmwr6" Mar 18 10:15:02.197709 master-0 kubenswrapper[30420]: I0318 10:15:02.197623 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6b698c899f-l7lq9" podStartSLOduration=4.197602238 podStartE2EDuration="4.197602238s" podCreationTimestamp="2026-03-18 10:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:15:02.192943392 +0000 UTC m=+266.245689331" watchObservedRunningTime="2026-03-18 10:15:02.197602238 +0000 UTC m=+266.250348157" Mar 18 10:15:02.220114 master-0 kubenswrapper[30420]: I0318 10:15:02.219999 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-6c5f9d9f94-gmwr6" podStartSLOduration=4.219979728 podStartE2EDuration="4.219979728s" podCreationTimestamp="2026-03-18 10:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:15:02.21486556 +0000 UTC m=+266.267611499" watchObservedRunningTime="2026-03-18 10:15:02.219979728 +0000 UTC m=+266.272725657" Mar 18 10:15:03.169149 master-0 kubenswrapper[30420]: I0318 10:15:03.169100 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6b698c899f-l7lq9" Mar 18 10:15:03.174735 master-0 kubenswrapper[30420]: I0318 10:15:03.174690 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6b698c899f-l7lq9" Mar 18 10:15:09.306680 master-0 kubenswrapper[30420]: I0318 10:15:09.306612 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 18 10:15:09.308657 master-0 kubenswrapper[30420]: I0318 10:15:09.308629 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:09.310019 master-0 kubenswrapper[30420]: I0318 10:15:09.309979 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Mar 18 10:15:09.310754 master-0 kubenswrapper[30420]: I0318 10:15:09.310729 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Mar 18 10:15:09.311249 master-0 kubenswrapper[30420]: I0318 10:15:09.311223 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Mar 18 10:15:09.311312 master-0 kubenswrapper[30420]: I0318 10:15:09.311267 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Mar 18 10:15:09.311347 master-0 kubenswrapper[30420]: I0318 10:15:09.311225 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Mar 18 10:15:09.311658 master-0 kubenswrapper[30420]: I0318 10:15:09.311633 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Mar 18 10:15:09.311911 master-0 kubenswrapper[30420]: I0318 10:15:09.311886 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Mar 18 10:15:09.322677 master-0 kubenswrapper[30420]: I0318 10:15:09.322623 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Mar 18 10:15:09.340146 master-0 kubenswrapper[30420]: I0318 10:15:09.340065 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 18 10:15:09.407388 master-0 kubenswrapper[30420]: I0318 10:15:09.407330 30420 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-tls-assets\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:09.407580 master-0 kubenswrapper[30420]: I0318 10:15:09.407402 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-web-config\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:09.407580 master-0 kubenswrapper[30420]: I0318 10:15:09.407490 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:09.407664 master-0 kubenswrapper[30420]: I0318 10:15:09.407639 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:09.407745 master-0 kubenswrapper[30420]: I0318 10:15:09.407708 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " 
pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:09.408078 master-0 kubenswrapper[30420]: I0318 10:15:09.407802 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:09.408078 master-0 kubenswrapper[30420]: I0318 10:15:09.407856 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:09.408078 master-0 kubenswrapper[30420]: I0318 10:15:09.407933 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:09.408078 master-0 kubenswrapper[30420]: I0318 10:15:09.407969 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjjcq\" (UniqueName: \"kubernetes.io/projected/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-kube-api-access-vjjcq\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:09.408078 master-0 kubenswrapper[30420]: I0318 10:15:09.408005 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:09.408078 master-0 kubenswrapper[30420]: I0318 10:15:09.408033 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-config-out\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:09.408078 master-0 kubenswrapper[30420]: I0318 10:15:09.408069 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-config-volume\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:09.508523 master-0 kubenswrapper[30420]: I0318 10:15:09.508457 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:09.508523 master-0 kubenswrapper[30420]: I0318 10:15:09.508516 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:09.509024 
master-0 kubenswrapper[30420]: I0318 10:15:09.508713 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:09.509024 master-0 kubenswrapper[30420]: I0318 10:15:09.508796 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:09.509024 master-0 kubenswrapper[30420]: I0318 10:15:09.508888 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:09.509024 master-0 kubenswrapper[30420]: I0318 10:15:09.508924 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjjcq\" (UniqueName: \"kubernetes.io/projected/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-kube-api-access-vjjcq\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:09.509024 master-0 kubenswrapper[30420]: I0318 10:15:09.508957 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" 
(UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:09.509024 master-0 kubenswrapper[30420]: I0318 10:15:09.508983 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-config-out\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:09.509024 master-0 kubenswrapper[30420]: I0318 10:15:09.509021 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-config-volume\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:09.509307 master-0 kubenswrapper[30420]: I0318 10:15:09.509071 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-tls-assets\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:09.509307 master-0 kubenswrapper[30420]: I0318 10:15:09.509107 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-web-config\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:09.509307 master-0 kubenswrapper[30420]: I0318 10:15:09.509133 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: 
\"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:09.509307 master-0 kubenswrapper[30420]: I0318 10:15:09.509292 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:09.509806 master-0 kubenswrapper[30420]: I0318 10:15:09.509756 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:09.510021 master-0 kubenswrapper[30420]: I0318 10:15:09.509981 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:09.510164 master-0 kubenswrapper[30420]: E0318 10:15:09.510132 30420 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Mar 18 10:15:09.510628 master-0 kubenswrapper[30420]: E0318 10:15:09.510194 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-main-tls podName:9adfdd99-ef2a-4698-8ef5-c2f97c4b6761 nodeName:}" failed. No retries permitted until 2026-03-18 10:15:10.010177476 +0000 UTC m=+274.062923485 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "9adfdd99-ef2a-4698-8ef5-c2f97c4b6761") : secret "alertmanager-main-tls" not found Mar 18 10:15:09.512151 master-0 kubenswrapper[30420]: I0318 10:15:09.511704 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:09.512782 master-0 kubenswrapper[30420]: I0318 10:15:09.512501 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:09.512782 master-0 kubenswrapper[30420]: I0318 10:15:09.512584 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-tls-assets\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:09.512782 master-0 kubenswrapper[30420]: I0318 10:15:09.512745 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-config-out\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:09.513696 master-0 kubenswrapper[30420]: I0318 
10:15:09.513658 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-config-volume\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:09.517502 master-0 kubenswrapper[30420]: I0318 10:15:09.517459 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:09.517502 master-0 kubenswrapper[30420]: I0318 10:15:09.517494 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-web-config\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:09.525184 master-0 kubenswrapper[30420]: I0318 10:15:09.525139 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjjcq\" (UniqueName: \"kubernetes.io/projected/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-kube-api-access-vjjcq\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:10.016132 master-0 kubenswrapper[30420]: I0318 10:15:10.015990 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:10.016449 master-0 
kubenswrapper[30420]: E0318 10:15:10.016289 30420 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Mar 18 10:15:10.016449 master-0 kubenswrapper[30420]: E0318 10:15:10.016405 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-main-tls podName:9adfdd99-ef2a-4698-8ef5-c2f97c4b6761 nodeName:}" failed. No retries permitted until 2026-03-18 10:15:11.016377819 +0000 UTC m=+275.069123778 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "9adfdd99-ef2a-4698-8ef5-c2f97c4b6761") : secret "alertmanager-main-tls" not found Mar 18 10:15:10.260711 master-0 kubenswrapper[30420]: I0318 10:15:10.260649 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk"] Mar 18 10:15:10.263103 master-0 kubenswrapper[30420]: I0318 10:15:10.263065 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" Mar 18 10:15:10.265357 master-0 kubenswrapper[30420]: I0318 10:15:10.265277 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Mar 18 10:15:10.265786 master-0 kubenswrapper[30420]: I0318 10:15:10.265740 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Mar 18 10:15:10.265786 master-0 kubenswrapper[30420]: I0318 10:15:10.265769 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Mar 18 10:15:10.266086 master-0 kubenswrapper[30420]: I0318 10:15:10.266058 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Mar 18 10:15:10.266191 master-0 kubenswrapper[30420]: I0318 10:15:10.266123 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-6fjbchum0p1le" Mar 18 10:15:10.266378 master-0 kubenswrapper[30420]: I0318 10:15:10.266355 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Mar 18 10:15:10.271697 master-0 kubenswrapper[30420]: I0318 10:15:10.271652 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk"] Mar 18 10:15:10.419663 master-0 kubenswrapper[30420]: I0318 10:15:10.419588 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/274f890d-dc38-4220-98a2-357d86249c63-secret-grpc-tls\") pod \"thanos-querier-5cfdd55bb7-8m5wk\" (UID: \"274f890d-dc38-4220-98a2-357d86249c63\") " pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" Mar 18 10:15:10.419663 master-0 kubenswrapper[30420]: I0318 
10:15:10.419649 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zndwq\" (UniqueName: \"kubernetes.io/projected/274f890d-dc38-4220-98a2-357d86249c63-kube-api-access-zndwq\") pod \"thanos-querier-5cfdd55bb7-8m5wk\" (UID: \"274f890d-dc38-4220-98a2-357d86249c63\") " pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" Mar 18 10:15:10.420651 master-0 kubenswrapper[30420]: I0318 10:15:10.419711 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/274f890d-dc38-4220-98a2-357d86249c63-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5cfdd55bb7-8m5wk\" (UID: \"274f890d-dc38-4220-98a2-357d86249c63\") " pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" Mar 18 10:15:10.420651 master-0 kubenswrapper[30420]: I0318 10:15:10.419743 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/274f890d-dc38-4220-98a2-357d86249c63-secret-thanos-querier-tls\") pod \"thanos-querier-5cfdd55bb7-8m5wk\" (UID: \"274f890d-dc38-4220-98a2-357d86249c63\") " pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" Mar 18 10:15:10.420651 master-0 kubenswrapper[30420]: I0318 10:15:10.419809 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/274f890d-dc38-4220-98a2-357d86249c63-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5cfdd55bb7-8m5wk\" (UID: \"274f890d-dc38-4220-98a2-357d86249c63\") " pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" Mar 18 10:15:10.420651 master-0 kubenswrapper[30420]: I0318 10:15:10.419888 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/274f890d-dc38-4220-98a2-357d86249c63-metrics-client-ca\") pod \"thanos-querier-5cfdd55bb7-8m5wk\" (UID: \"274f890d-dc38-4220-98a2-357d86249c63\") " pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" Mar 18 10:15:10.420651 master-0 kubenswrapper[30420]: I0318 10:15:10.419969 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/274f890d-dc38-4220-98a2-357d86249c63-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5cfdd55bb7-8m5wk\" (UID: \"274f890d-dc38-4220-98a2-357d86249c63\") " pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" Mar 18 10:15:10.420651 master-0 kubenswrapper[30420]: I0318 10:15:10.420036 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/274f890d-dc38-4220-98a2-357d86249c63-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5cfdd55bb7-8m5wk\" (UID: \"274f890d-dc38-4220-98a2-357d86249c63\") " pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" Mar 18 10:15:10.522024 master-0 kubenswrapper[30420]: I0318 10:15:10.521800 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/274f890d-dc38-4220-98a2-357d86249c63-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5cfdd55bb7-8m5wk\" (UID: \"274f890d-dc38-4220-98a2-357d86249c63\") " pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" Mar 18 10:15:10.522024 master-0 kubenswrapper[30420]: I0318 10:15:10.521930 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: 
\"kubernetes.io/secret/274f890d-dc38-4220-98a2-357d86249c63-secret-thanos-querier-tls\") pod \"thanos-querier-5cfdd55bb7-8m5wk\" (UID: \"274f890d-dc38-4220-98a2-357d86249c63\") " pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" Mar 18 10:15:10.522024 master-0 kubenswrapper[30420]: I0318 10:15:10.521995 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/274f890d-dc38-4220-98a2-357d86249c63-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5cfdd55bb7-8m5wk\" (UID: \"274f890d-dc38-4220-98a2-357d86249c63\") " pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" Mar 18 10:15:10.522542 master-0 kubenswrapper[30420]: I0318 10:15:10.522048 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/274f890d-dc38-4220-98a2-357d86249c63-metrics-client-ca\") pod \"thanos-querier-5cfdd55bb7-8m5wk\" (UID: \"274f890d-dc38-4220-98a2-357d86249c63\") " pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" Mar 18 10:15:10.522542 master-0 kubenswrapper[30420]: I0318 10:15:10.522125 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/274f890d-dc38-4220-98a2-357d86249c63-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5cfdd55bb7-8m5wk\" (UID: \"274f890d-dc38-4220-98a2-357d86249c63\") " pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" Mar 18 10:15:10.522542 master-0 kubenswrapper[30420]: I0318 10:15:10.522202 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/274f890d-dc38-4220-98a2-357d86249c63-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5cfdd55bb7-8m5wk\" (UID: 
\"274f890d-dc38-4220-98a2-357d86249c63\") " pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" Mar 18 10:15:10.522542 master-0 kubenswrapper[30420]: I0318 10:15:10.522403 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/274f890d-dc38-4220-98a2-357d86249c63-secret-grpc-tls\") pod \"thanos-querier-5cfdd55bb7-8m5wk\" (UID: \"274f890d-dc38-4220-98a2-357d86249c63\") " pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" Mar 18 10:15:10.522542 master-0 kubenswrapper[30420]: I0318 10:15:10.522462 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zndwq\" (UniqueName: \"kubernetes.io/projected/274f890d-dc38-4220-98a2-357d86249c63-kube-api-access-zndwq\") pod \"thanos-querier-5cfdd55bb7-8m5wk\" (UID: \"274f890d-dc38-4220-98a2-357d86249c63\") " pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" Mar 18 10:15:10.528037 master-0 kubenswrapper[30420]: I0318 10:15:10.523968 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/274f890d-dc38-4220-98a2-357d86249c63-metrics-client-ca\") pod \"thanos-querier-5cfdd55bb7-8m5wk\" (UID: \"274f890d-dc38-4220-98a2-357d86249c63\") " pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" Mar 18 10:15:10.529094 master-0 kubenswrapper[30420]: I0318 10:15:10.528698 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/274f890d-dc38-4220-98a2-357d86249c63-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5cfdd55bb7-8m5wk\" (UID: \"274f890d-dc38-4220-98a2-357d86249c63\") " pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" Mar 18 10:15:10.529094 master-0 kubenswrapper[30420]: I0318 10:15:10.528757 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/274f890d-dc38-4220-98a2-357d86249c63-secret-thanos-querier-tls\") pod \"thanos-querier-5cfdd55bb7-8m5wk\" (UID: \"274f890d-dc38-4220-98a2-357d86249c63\") " pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" Mar 18 10:15:10.529094 master-0 kubenswrapper[30420]: I0318 10:15:10.528787 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/274f890d-dc38-4220-98a2-357d86249c63-secret-grpc-tls\") pod \"thanos-querier-5cfdd55bb7-8m5wk\" (UID: \"274f890d-dc38-4220-98a2-357d86249c63\") " pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" Mar 18 10:15:10.529094 master-0 kubenswrapper[30420]: I0318 10:15:10.529078 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/274f890d-dc38-4220-98a2-357d86249c63-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5cfdd55bb7-8m5wk\" (UID: \"274f890d-dc38-4220-98a2-357d86249c63\") " pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" Mar 18 10:15:10.529571 master-0 kubenswrapper[30420]: I0318 10:15:10.529165 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/274f890d-dc38-4220-98a2-357d86249c63-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5cfdd55bb7-8m5wk\" (UID: \"274f890d-dc38-4220-98a2-357d86249c63\") " pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" Mar 18 10:15:10.531053 master-0 kubenswrapper[30420]: I0318 10:15:10.529989 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/274f890d-dc38-4220-98a2-357d86249c63-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5cfdd55bb7-8m5wk\" (UID: 
\"274f890d-dc38-4220-98a2-357d86249c63\") " pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" Mar 18 10:15:10.551740 master-0 kubenswrapper[30420]: I0318 10:15:10.551688 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zndwq\" (UniqueName: \"kubernetes.io/projected/274f890d-dc38-4220-98a2-357d86249c63-kube-api-access-zndwq\") pod \"thanos-querier-5cfdd55bb7-8m5wk\" (UID: \"274f890d-dc38-4220-98a2-357d86249c63\") " pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" Mar 18 10:15:10.584652 master-0 kubenswrapper[30420]: I0318 10:15:10.584594 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" Mar 18 10:15:11.005879 master-0 kubenswrapper[30420]: I0318 10:15:11.005657 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk"] Mar 18 10:15:11.030496 master-0 kubenswrapper[30420]: I0318 10:15:11.030396 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:11.030702 master-0 kubenswrapper[30420]: E0318 10:15:11.030639 30420 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Mar 18 10:15:11.030802 master-0 kubenswrapper[30420]: E0318 10:15:11.030770 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-main-tls podName:9adfdd99-ef2a-4698-8ef5-c2f97c4b6761 nodeName:}" failed. No retries permitted until 2026-03-18 10:15:13.030742084 +0000 UTC m=+277.083488013 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "9adfdd99-ef2a-4698-8ef5-c2f97c4b6761") : secret "alertmanager-main-tls" not found Mar 18 10:15:11.232252 master-0 kubenswrapper[30420]: I0318 10:15:11.232108 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" event={"ID":"274f890d-dc38-4220-98a2-357d86249c63","Type":"ContainerStarted","Data":"f7cd3bbbcead95f7a9538b73cf87268299d9e4dec18a79455c08dbca0ccf39a5"} Mar 18 10:15:13.062680 master-0 kubenswrapper[30420]: I0318 10:15:13.062604 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:13.063270 master-0 kubenswrapper[30420]: E0318 10:15:13.062790 30420 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Mar 18 10:15:13.063270 master-0 kubenswrapper[30420]: E0318 10:15:13.062876 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-main-tls podName:9adfdd99-ef2a-4698-8ef5-c2f97c4b6761 nodeName:}" failed. No retries permitted until 2026-03-18 10:15:17.062855897 +0000 UTC m=+281.115601836 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "9adfdd99-ef2a-4698-8ef5-c2f97c4b6761") : secret "alertmanager-main-tls" not found Mar 18 10:15:13.120363 master-0 kubenswrapper[30420]: I0318 10:15:13.117406 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-7d8bb64c78-vvvft"] Mar 18 10:15:13.124996 master-0 kubenswrapper[30420]: I0318 10:15:13.124926 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-7d8bb64c78-vvvft" Mar 18 10:15:13.127436 master-0 kubenswrapper[30420]: I0318 10:15:13.127382 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-bsa0md835fnus" Mar 18 10:15:13.135495 master-0 kubenswrapper[30420]: I0318 10:15:13.134865 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7d8bb64c78-vvvft"] Mar 18 10:15:13.158047 master-0 kubenswrapper[30420]: I0318 10:15:13.157665 30420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-74c475bc87-xx98m"] Mar 18 10:15:13.158047 master-0 kubenswrapper[30420]: I0318 10:15:13.157944 30420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" podUID="106fc2a2-9e7b-4f86-94b8-b1a1906646d8" containerName="metrics-server" containerID="cri-o://aae90b8b3fa095e8ace85536181927d10e6ffa9bfdc0f83781e48ac6ccdad6c0" gracePeriod=170 Mar 18 10:15:13.164022 master-0 kubenswrapper[30420]: I0318 10:15:13.163949 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx62p\" (UniqueName: \"kubernetes.io/projected/2fb70bb5-3d3d-4abb-8f24-433e65792845-kube-api-access-xx62p\") pod 
\"metrics-server-7d8bb64c78-vvvft\" (UID: \"2fb70bb5-3d3d-4abb-8f24-433e65792845\") " pod="openshift-monitoring/metrics-server-7d8bb64c78-vvvft" Mar 18 10:15:13.164129 master-0 kubenswrapper[30420]: I0318 10:15:13.164020 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fb70bb5-3d3d-4abb-8f24-433e65792845-client-ca-bundle\") pod \"metrics-server-7d8bb64c78-vvvft\" (UID: \"2fb70bb5-3d3d-4abb-8f24-433e65792845\") " pod="openshift-monitoring/metrics-server-7d8bb64c78-vvvft" Mar 18 10:15:13.164129 master-0 kubenswrapper[30420]: I0318 10:15:13.164061 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/2fb70bb5-3d3d-4abb-8f24-433e65792845-secret-metrics-client-certs\") pod \"metrics-server-7d8bb64c78-vvvft\" (UID: \"2fb70bb5-3d3d-4abb-8f24-433e65792845\") " pod="openshift-monitoring/metrics-server-7d8bb64c78-vvvft" Mar 18 10:15:13.164129 master-0 kubenswrapper[30420]: I0318 10:15:13.164091 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/2fb70bb5-3d3d-4abb-8f24-433e65792845-audit-log\") pod \"metrics-server-7d8bb64c78-vvvft\" (UID: \"2fb70bb5-3d3d-4abb-8f24-433e65792845\") " pod="openshift-monitoring/metrics-server-7d8bb64c78-vvvft" Mar 18 10:15:13.164129 master-0 kubenswrapper[30420]: I0318 10:15:13.164120 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fb70bb5-3d3d-4abb-8f24-433e65792845-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7d8bb64c78-vvvft\" (UID: \"2fb70bb5-3d3d-4abb-8f24-433e65792845\") " pod="openshift-monitoring/metrics-server-7d8bb64c78-vvvft" Mar 18 10:15:13.164436 master-0 
kubenswrapper[30420]: I0318 10:15:13.164385 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/2fb70bb5-3d3d-4abb-8f24-433e65792845-secret-metrics-server-tls\") pod \"metrics-server-7d8bb64c78-vvvft\" (UID: \"2fb70bb5-3d3d-4abb-8f24-433e65792845\") " pod="openshift-monitoring/metrics-server-7d8bb64c78-vvvft" Mar 18 10:15:13.164511 master-0 kubenswrapper[30420]: I0318 10:15:13.164486 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/2fb70bb5-3d3d-4abb-8f24-433e65792845-metrics-server-audit-profiles\") pod \"metrics-server-7d8bb64c78-vvvft\" (UID: \"2fb70bb5-3d3d-4abb-8f24-433e65792845\") " pod="openshift-monitoring/metrics-server-7d8bb64c78-vvvft" Mar 18 10:15:13.265620 master-0 kubenswrapper[30420]: I0318 10:15:13.265421 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/2fb70bb5-3d3d-4abb-8f24-433e65792845-secret-metrics-server-tls\") pod \"metrics-server-7d8bb64c78-vvvft\" (UID: \"2fb70bb5-3d3d-4abb-8f24-433e65792845\") " pod="openshift-monitoring/metrics-server-7d8bb64c78-vvvft" Mar 18 10:15:13.265620 master-0 kubenswrapper[30420]: I0318 10:15:13.265623 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/2fb70bb5-3d3d-4abb-8f24-433e65792845-metrics-server-audit-profiles\") pod \"metrics-server-7d8bb64c78-vvvft\" (UID: \"2fb70bb5-3d3d-4abb-8f24-433e65792845\") " pod="openshift-monitoring/metrics-server-7d8bb64c78-vvvft" Mar 18 10:15:13.265923 master-0 kubenswrapper[30420]: I0318 10:15:13.265660 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx62p\" (UniqueName: 
\"kubernetes.io/projected/2fb70bb5-3d3d-4abb-8f24-433e65792845-kube-api-access-xx62p\") pod \"metrics-server-7d8bb64c78-vvvft\" (UID: \"2fb70bb5-3d3d-4abb-8f24-433e65792845\") " pod="openshift-monitoring/metrics-server-7d8bb64c78-vvvft" Mar 18 10:15:13.265923 master-0 kubenswrapper[30420]: I0318 10:15:13.265678 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fb70bb5-3d3d-4abb-8f24-433e65792845-client-ca-bundle\") pod \"metrics-server-7d8bb64c78-vvvft\" (UID: \"2fb70bb5-3d3d-4abb-8f24-433e65792845\") " pod="openshift-monitoring/metrics-server-7d8bb64c78-vvvft" Mar 18 10:15:13.265923 master-0 kubenswrapper[30420]: I0318 10:15:13.265693 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/2fb70bb5-3d3d-4abb-8f24-433e65792845-secret-metrics-client-certs\") pod \"metrics-server-7d8bb64c78-vvvft\" (UID: \"2fb70bb5-3d3d-4abb-8f24-433e65792845\") " pod="openshift-monitoring/metrics-server-7d8bb64c78-vvvft" Mar 18 10:15:13.265923 master-0 kubenswrapper[30420]: I0318 10:15:13.265708 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/2fb70bb5-3d3d-4abb-8f24-433e65792845-audit-log\") pod \"metrics-server-7d8bb64c78-vvvft\" (UID: \"2fb70bb5-3d3d-4abb-8f24-433e65792845\") " pod="openshift-monitoring/metrics-server-7d8bb64c78-vvvft" Mar 18 10:15:13.265923 master-0 kubenswrapper[30420]: I0318 10:15:13.265726 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fb70bb5-3d3d-4abb-8f24-433e65792845-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7d8bb64c78-vvvft\" (UID: \"2fb70bb5-3d3d-4abb-8f24-433e65792845\") " pod="openshift-monitoring/metrics-server-7d8bb64c78-vvvft" Mar 18 10:15:13.266640 
master-0 kubenswrapper[30420]: I0318 10:15:13.266600 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fb70bb5-3d3d-4abb-8f24-433e65792845-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7d8bb64c78-vvvft\" (UID: \"2fb70bb5-3d3d-4abb-8f24-433e65792845\") " pod="openshift-monitoring/metrics-server-7d8bb64c78-vvvft" Mar 18 10:15:13.267439 master-0 kubenswrapper[30420]: I0318 10:15:13.267377 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/2fb70bb5-3d3d-4abb-8f24-433e65792845-audit-log\") pod \"metrics-server-7d8bb64c78-vvvft\" (UID: \"2fb70bb5-3d3d-4abb-8f24-433e65792845\") " pod="openshift-monitoring/metrics-server-7d8bb64c78-vvvft" Mar 18 10:15:13.267913 master-0 kubenswrapper[30420]: I0318 10:15:13.267875 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/2fb70bb5-3d3d-4abb-8f24-433e65792845-metrics-server-audit-profiles\") pod \"metrics-server-7d8bb64c78-vvvft\" (UID: \"2fb70bb5-3d3d-4abb-8f24-433e65792845\") " pod="openshift-monitoring/metrics-server-7d8bb64c78-vvvft" Mar 18 10:15:13.269555 master-0 kubenswrapper[30420]: I0318 10:15:13.269516 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fb70bb5-3d3d-4abb-8f24-433e65792845-client-ca-bundle\") pod \"metrics-server-7d8bb64c78-vvvft\" (UID: \"2fb70bb5-3d3d-4abb-8f24-433e65792845\") " pod="openshift-monitoring/metrics-server-7d8bb64c78-vvvft" Mar 18 10:15:13.269954 master-0 kubenswrapper[30420]: I0318 10:15:13.269920 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/2fb70bb5-3d3d-4abb-8f24-433e65792845-secret-metrics-server-tls\") pod 
\"metrics-server-7d8bb64c78-vvvft\" (UID: \"2fb70bb5-3d3d-4abb-8f24-433e65792845\") " pod="openshift-monitoring/metrics-server-7d8bb64c78-vvvft" Mar 18 10:15:13.270183 master-0 kubenswrapper[30420]: I0318 10:15:13.270136 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/2fb70bb5-3d3d-4abb-8f24-433e65792845-secret-metrics-client-certs\") pod \"metrics-server-7d8bb64c78-vvvft\" (UID: \"2fb70bb5-3d3d-4abb-8f24-433e65792845\") " pod="openshift-monitoring/metrics-server-7d8bb64c78-vvvft" Mar 18 10:15:13.283905 master-0 kubenswrapper[30420]: I0318 10:15:13.283856 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx62p\" (UniqueName: \"kubernetes.io/projected/2fb70bb5-3d3d-4abb-8f24-433e65792845-kube-api-access-xx62p\") pod \"metrics-server-7d8bb64c78-vvvft\" (UID: \"2fb70bb5-3d3d-4abb-8f24-433e65792845\") " pod="openshift-monitoring/metrics-server-7d8bb64c78-vvvft" Mar 18 10:15:13.446712 master-0 kubenswrapper[30420]: I0318 10:15:13.446667 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-7d8bb64c78-vvvft" Mar 18 10:15:13.892429 master-0 kubenswrapper[30420]: I0318 10:15:13.892371 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7d8bb64c78-vvvft"] Mar 18 10:15:13.895275 master-0 kubenswrapper[30420]: W0318 10:15:13.895224 30420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fb70bb5_3d3d_4abb_8f24_433e65792845.slice/crio-174b1c4db0ad5db02846834d69c8a8e2af91ec13e827e2200dffccdda05c0729 WatchSource:0}: Error finding container 174b1c4db0ad5db02846834d69c8a8e2af91ec13e827e2200dffccdda05c0729: Status 404 returned error can't find the container with id 174b1c4db0ad5db02846834d69c8a8e2af91ec13e827e2200dffccdda05c0729 Mar 18 10:15:14.254189 master-0 kubenswrapper[30420]: I0318 10:15:14.254016 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" event={"ID":"274f890d-dc38-4220-98a2-357d86249c63","Type":"ContainerStarted","Data":"5cf24ebd8697dfbca75d9b6eb7630a2d98f47e4a74f15c2047b593a711a46f82"} Mar 18 10:15:14.254189 master-0 kubenswrapper[30420]: I0318 10:15:14.254085 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" event={"ID":"274f890d-dc38-4220-98a2-357d86249c63","Type":"ContainerStarted","Data":"2ff2891f225a5cfeaf20350647d1538149f07ce9a4fe4069b323cb4578330b78"} Mar 18 10:15:14.254189 master-0 kubenswrapper[30420]: I0318 10:15:14.254107 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" event={"ID":"274f890d-dc38-4220-98a2-357d86249c63","Type":"ContainerStarted","Data":"9024cc37bcabf0e65deaae9c5cfbc46c3bcc640826b777152ad1e1e5cf86ae49"} Mar 18 10:15:14.257681 master-0 kubenswrapper[30420]: I0318 10:15:14.257634 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/metrics-server-7d8bb64c78-vvvft" event={"ID":"2fb70bb5-3d3d-4abb-8f24-433e65792845","Type":"ContainerStarted","Data":"21c77c14c75a6bd236fc8f24222c1a066927858190c3ef123961df5de8d08876"} Mar 18 10:15:14.257766 master-0 kubenswrapper[30420]: I0318 10:15:14.257683 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7d8bb64c78-vvvft" event={"ID":"2fb70bb5-3d3d-4abb-8f24-433e65792845","Type":"ContainerStarted","Data":"174b1c4db0ad5db02846834d69c8a8e2af91ec13e827e2200dffccdda05c0729"} Mar 18 10:15:14.299110 master-0 kubenswrapper[30420]: I0318 10:15:14.299028 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-7d8bb64c78-vvvft" podStartSLOduration=1.29900714 podStartE2EDuration="1.29900714s" podCreationTimestamp="2026-03-18 10:15:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:15:14.29581821 +0000 UTC m=+278.348564159" watchObservedRunningTime="2026-03-18 10:15:14.29900714 +0000 UTC m=+278.351753109" Mar 18 10:15:14.893346 master-0 kubenswrapper[30420]: I0318 10:15:14.893274 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 18 10:15:14.896772 master-0 kubenswrapper[30420]: I0318 10:15:14.896714 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:14.903471 master-0 kubenswrapper[30420]: I0318 10:15:14.903420 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Mar 18 10:15:14.903998 master-0 kubenswrapper[30420]: I0318 10:15:14.903975 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Mar 18 10:15:14.904178 master-0 kubenswrapper[30420]: I0318 10:15:14.904154 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Mar 18 10:15:14.904379 master-0 kubenswrapper[30420]: I0318 10:15:14.904351 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Mar 18 10:15:14.904488 master-0 kubenswrapper[30420]: I0318 10:15:14.904468 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Mar 18 10:15:14.905137 master-0 kubenswrapper[30420]: I0318 10:15:14.905106 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Mar 18 10:15:14.905390 master-0 kubenswrapper[30420]: I0318 10:15:14.905360 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Mar 18 10:15:14.905529 master-0 kubenswrapper[30420]: I0318 10:15:14.905502 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Mar 18 10:15:14.905904 master-0 kubenswrapper[30420]: I0318 10:15:14.905870 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-cjuqtgluoqmcm" Mar 18 10:15:14.906683 master-0 kubenswrapper[30420]: I0318 10:15:14.906640 30420 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Mar 18 10:15:14.910647 master-0 kubenswrapper[30420]: I0318 10:15:14.910620 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Mar 18 10:15:14.915238 master-0 kubenswrapper[30420]: I0318 10:15:14.915193 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Mar 18 10:15:14.924863 master-0 kubenswrapper[30420]: I0318 10:15:14.924308 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 18 10:15:15.003896 master-0 kubenswrapper[30420]: I0318 10:15:15.003798 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.004136 master-0 kubenswrapper[30420]: I0318 10:15:15.004115 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82595633-1fc3-4dc7-a5bc-ce391c4d743d-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.004272 master-0 kubenswrapper[30420]: I0318 10:15:15.004258 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/82595633-1fc3-4dc7-a5bc-ce391c4d743d-config-out\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.004384 master-0 kubenswrapper[30420]: I0318 10:15:15.004372 
30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.004499 master-0 kubenswrapper[30420]: I0318 10:15:15.004482 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.004626 master-0 kubenswrapper[30420]: I0318 10:15:15.004613 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82595633-1fc3-4dc7-a5bc-ce391c4d743d-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.004747 master-0 kubenswrapper[30420]: I0318 10:15:15.004733 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82595633-1fc3-4dc7-a5bc-ce391c4d743d-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.004877 master-0 kubenswrapper[30420]: I0318 10:15:15.004863 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95sh2\" (UniqueName: \"kubernetes.io/projected/82595633-1fc3-4dc7-a5bc-ce391c4d743d-kube-api-access-95sh2\") pod 
\"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.005032 master-0 kubenswrapper[30420]: I0318 10:15:15.005018 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-config\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.005158 master-0 kubenswrapper[30420]: I0318 10:15:15.005144 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/82595633-1fc3-4dc7-a5bc-ce391c4d743d-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.005277 master-0 kubenswrapper[30420]: I0318 10:15:15.005264 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/82595633-1fc3-4dc7-a5bc-ce391c4d743d-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.005390 master-0 kubenswrapper[30420]: I0318 10:15:15.005377 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.005497 master-0 kubenswrapper[30420]: I0318 10:15:15.005485 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tls-assets\" (UniqueName: \"kubernetes.io/projected/82595633-1fc3-4dc7-a5bc-ce391c4d743d-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.005622 master-0 kubenswrapper[30420]: I0318 10:15:15.005606 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.005747 master-0 kubenswrapper[30420]: I0318 10:15:15.005734 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-web-config\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.005878 master-0 kubenswrapper[30420]: I0318 10:15:15.005863 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.006000 master-0 kubenswrapper[30420]: I0318 10:15:15.005986 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.006358 master-0 kubenswrapper[30420]: I0318 10:15:15.006346 30420 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/82595633-1fc3-4dc7-a5bc-ce391c4d743d-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.107942 master-0 kubenswrapper[30420]: I0318 10:15:15.107836 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-web-config\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.107942 master-0 kubenswrapper[30420]: I0318 10:15:15.107902 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.107942 master-0 kubenswrapper[30420]: I0318 10:15:15.107924 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.107942 master-0 kubenswrapper[30420]: I0318 10:15:15.107957 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/82595633-1fc3-4dc7-a5bc-ce391c4d743d-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.107942 master-0 
kubenswrapper[30420]: I0318 10:15:15.107981 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.108626 master-0 kubenswrapper[30420]: I0318 10:15:15.107997 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82595633-1fc3-4dc7-a5bc-ce391c4d743d-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.108626 master-0 kubenswrapper[30420]: I0318 10:15:15.108018 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/82595633-1fc3-4dc7-a5bc-ce391c4d743d-config-out\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.108626 master-0 kubenswrapper[30420]: I0318 10:15:15.108034 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.108626 master-0 kubenswrapper[30420]: I0318 10:15:15.108050 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " 
pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.108626 master-0 kubenswrapper[30420]: I0318 10:15:15.108083 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82595633-1fc3-4dc7-a5bc-ce391c4d743d-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.108626 master-0 kubenswrapper[30420]: I0318 10:15:15.108099 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82595633-1fc3-4dc7-a5bc-ce391c4d743d-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.108626 master-0 kubenswrapper[30420]: I0318 10:15:15.108119 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95sh2\" (UniqueName: \"kubernetes.io/projected/82595633-1fc3-4dc7-a5bc-ce391c4d743d-kube-api-access-95sh2\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.108626 master-0 kubenswrapper[30420]: I0318 10:15:15.108173 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-config\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.108626 master-0 kubenswrapper[30420]: I0318 10:15:15.108194 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/82595633-1fc3-4dc7-a5bc-ce391c4d743d-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: 
\"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.108626 master-0 kubenswrapper[30420]: I0318 10:15:15.108210 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/82595633-1fc3-4dc7-a5bc-ce391c4d743d-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.108626 master-0 kubenswrapper[30420]: I0318 10:15:15.108229 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/82595633-1fc3-4dc7-a5bc-ce391c4d743d-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.108626 master-0 kubenswrapper[30420]: I0318 10:15:15.108247 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.108626 master-0 kubenswrapper[30420]: I0318 10:15:15.108270 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.111123 master-0 kubenswrapper[30420]: E0318 10:15:15.108806 30420 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 18 10:15:15.111123 master-0 
kubenswrapper[30420]: E0318 10:15:15.108900 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-thanos-sidecar-tls podName:82595633-1fc3-4dc7-a5bc-ce391c4d743d nodeName:}" failed. No retries permitted until 2026-03-18 10:15:15.608876469 +0000 UTC m=+279.661622598 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "82595633-1fc3-4dc7-a5bc-ce391c4d743d") : secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 18 10:15:15.111123 master-0 kubenswrapper[30420]: E0318 10:15:15.109960 30420 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found Mar 18 10:15:15.111123 master-0 kubenswrapper[30420]: E0318 10:15:15.110052 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-tls podName:82595633-1fc3-4dc7-a5bc-ce391c4d743d nodeName:}" failed. No retries permitted until 2026-03-18 10:15:15.610031758 +0000 UTC m=+279.662777687 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "82595633-1fc3-4dc7-a5bc-ce391c4d743d") : secret "prometheus-k8s-tls" not found Mar 18 10:15:15.111123 master-0 kubenswrapper[30420]: I0318 10:15:15.110404 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82595633-1fc3-4dc7-a5bc-ce391c4d743d-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.111555 master-0 kubenswrapper[30420]: I0318 10:15:15.111514 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82595633-1fc3-4dc7-a5bc-ce391c4d743d-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.111748 master-0 kubenswrapper[30420]: I0318 10:15:15.111710 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/82595633-1fc3-4dc7-a5bc-ce391c4d743d-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.111905 master-0 kubenswrapper[30420]: I0318 10:15:15.111871 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/82595633-1fc3-4dc7-a5bc-ce391c4d743d-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.112387 master-0 kubenswrapper[30420]: I0318 10:15:15.112358 
30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.112815 master-0 kubenswrapper[30420]: I0318 10:15:15.112777 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82595633-1fc3-4dc7-a5bc-ce391c4d743d-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.113691 master-0 kubenswrapper[30420]: I0318 10:15:15.113194 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/82595633-1fc3-4dc7-a5bc-ce391c4d743d-config-out\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.113691 master-0 kubenswrapper[30420]: I0318 10:15:15.113642 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.113809 master-0 kubenswrapper[30420]: I0318 10:15:15.113751 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-config\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.114063 master-0 kubenswrapper[30420]: I0318 10:15:15.114027 30420 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.114221 master-0 kubenswrapper[30420]: I0318 10:15:15.114182 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.114284 master-0 kubenswrapper[30420]: I0318 10:15:15.114226 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.114801 master-0 kubenswrapper[30420]: I0318 10:15:15.114760 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/82595633-1fc3-4dc7-a5bc-ce391c4d743d-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.117779 master-0 kubenswrapper[30420]: I0318 10:15:15.115628 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-web-config\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.127691 master-0 kubenswrapper[30420]: I0318 10:15:15.127640 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/82595633-1fc3-4dc7-a5bc-ce391c4d743d-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.134004 master-0 kubenswrapper[30420]: I0318 10:15:15.133963 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95sh2\" (UniqueName: \"kubernetes.io/projected/82595633-1fc3-4dc7-a5bc-ce391c4d743d-kube-api-access-95sh2\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.268630 master-0 kubenswrapper[30420]: I0318 10:15:15.268564 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" event={"ID":"274f890d-dc38-4220-98a2-357d86249c63","Type":"ContainerStarted","Data":"72ae7f927f87a52f42aa67dd591ba56e2ece01d8ddab077657a5e7d0e5d26c12"} Mar 18 10:15:15.268630 master-0 kubenswrapper[30420]: I0318 10:15:15.268622 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" event={"ID":"274f890d-dc38-4220-98a2-357d86249c63","Type":"ContainerStarted","Data":"f5857da584490191ddff6faee3e6d1e8ab21f605b3d538cd8b130a6bb042f1ed"} Mar 18 10:15:15.268630 master-0 kubenswrapper[30420]: I0318 10:15:15.268635 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" event={"ID":"274f890d-dc38-4220-98a2-357d86249c63","Type":"ContainerStarted","Data":"ba43afd22a545785e7771fa7793d4ffd3424937582cbcf8fc08abb1d9da0414d"} Mar 18 10:15:15.269289 master-0 kubenswrapper[30420]: I0318 10:15:15.268673 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" Mar 18 10:15:15.313186 master-0 kubenswrapper[30420]: I0318 10:15:15.313114 30420 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" podStartSLOduration=1.554203539 podStartE2EDuration="5.313095798s" podCreationTimestamp="2026-03-18 10:15:10 +0000 UTC" firstStartedPulling="2026-03-18 10:15:11.011044091 +0000 UTC m=+275.063790020" lastFinishedPulling="2026-03-18 10:15:14.76993626 +0000 UTC m=+278.822682279" observedRunningTime="2026-03-18 10:15:15.308203676 +0000 UTC m=+279.360949605" watchObservedRunningTime="2026-03-18 10:15:15.313095798 +0000 UTC m=+279.365841727" Mar 18 10:15:15.615390 master-0 kubenswrapper[30420]: I0318 10:15:15.615243 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.615390 master-0 kubenswrapper[30420]: I0318 10:15:15.615328 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:15.615671 master-0 kubenswrapper[30420]: E0318 10:15:15.615462 30420 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 18 10:15:15.615671 master-0 kubenswrapper[30420]: E0318 10:15:15.615460 30420 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found Mar 18 10:15:15.615671 master-0 kubenswrapper[30420]: E0318 10:15:15.615516 30420 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-thanos-sidecar-tls podName:82595633-1fc3-4dc7-a5bc-ce391c4d743d nodeName:}" failed. No retries permitted until 2026-03-18 10:15:16.615502933 +0000 UTC m=+280.668248852 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "82595633-1fc3-4dc7-a5bc-ce391c4d743d") : secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 18 10:15:15.615671 master-0 kubenswrapper[30420]: E0318 10:15:15.615588 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-tls podName:82595633-1fc3-4dc7-a5bc-ce391c4d743d nodeName:}" failed. No retries permitted until 2026-03-18 10:15:16.615552564 +0000 UTC m=+280.668298533 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "82595633-1fc3-4dc7-a5bc-ce391c4d743d") : secret "prometheus-k8s-tls" not found Mar 18 10:15:16.630443 master-0 kubenswrapper[30420]: I0318 10:15:16.630381 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:16.630980 master-0 kubenswrapper[30420]: I0318 10:15:16.630473 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:16.630980 master-0 kubenswrapper[30420]: E0318 10:15:16.630585 30420 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found Mar 18 10:15:16.630980 master-0 kubenswrapper[30420]: E0318 10:15:16.630640 30420 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 18 10:15:16.630980 master-0 kubenswrapper[30420]: E0318 10:15:16.630659 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-tls podName:82595633-1fc3-4dc7-a5bc-ce391c4d743d nodeName:}" failed. No retries permitted until 2026-03-18 10:15:18.630639937 +0000 UTC m=+282.683385866 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "82595633-1fc3-4dc7-a5bc-ce391c4d743d") : secret "prometheus-k8s-tls" not found Mar 18 10:15:16.630980 master-0 kubenswrapper[30420]: E0318 10:15:16.630691 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-thanos-sidecar-tls podName:82595633-1fc3-4dc7-a5bc-ce391c4d743d nodeName:}" failed. No retries permitted until 2026-03-18 10:15:18.630678358 +0000 UTC m=+282.683424287 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "82595633-1fc3-4dc7-a5bc-ce391c4d743d") : secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 18 10:15:17.137791 master-0 kubenswrapper[30420]: I0318 10:15:17.137701 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:17.138227 master-0 kubenswrapper[30420]: E0318 10:15:17.137898 30420 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Mar 18 10:15:17.138227 master-0 kubenswrapper[30420]: E0318 10:15:17.137965 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-main-tls podName:9adfdd99-ef2a-4698-8ef5-c2f97c4b6761 nodeName:}" failed. 
No retries permitted until 2026-03-18 10:15:25.137946528 +0000 UTC m=+289.190692457 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "9adfdd99-ef2a-4698-8ef5-c2f97c4b6761") : secret "alertmanager-main-tls" not found Mar 18 10:15:18.669876 master-0 kubenswrapper[30420]: I0318 10:15:18.669785 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:18.670642 master-0 kubenswrapper[30420]: E0318 10:15:18.670035 30420 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found Mar 18 10:15:18.670642 master-0 kubenswrapper[30420]: E0318 10:15:18.670190 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-tls podName:82595633-1fc3-4dc7-a5bc-ce391c4d743d nodeName:}" failed. No retries permitted until 2026-03-18 10:15:22.670164397 +0000 UTC m=+286.722910356 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "82595633-1fc3-4dc7-a5bc-ce391c4d743d") : secret "prometheus-k8s-tls" not found Mar 18 10:15:18.670642 master-0 kubenswrapper[30420]: E0318 10:15:18.670198 30420 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 18 10:15:18.670642 master-0 kubenswrapper[30420]: E0318 10:15:18.670264 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-thanos-sidecar-tls podName:82595633-1fc3-4dc7-a5bc-ce391c4d743d nodeName:}" failed. No retries permitted until 2026-03-18 10:15:22.670246489 +0000 UTC m=+286.722992428 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "82595633-1fc3-4dc7-a5bc-ce391c4d743d") : secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 18 10:15:18.670642 master-0 kubenswrapper[30420]: I0318 10:15:18.670047 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:20.594186 master-0 kubenswrapper[30420]: I0318 10:15:20.594042 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-5cfdd55bb7-8m5wk" Mar 18 10:15:22.733924 master-0 kubenswrapper[30420]: I0318 10:15:22.733854 30420 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:22.734933 master-0 kubenswrapper[30420]: E0318 10:15:22.734081 30420 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found Mar 18 10:15:22.735094 master-0 kubenswrapper[30420]: E0318 10:15:22.735030 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-tls podName:82595633-1fc3-4dc7-a5bc-ce391c4d743d nodeName:}" failed. No retries permitted until 2026-03-18 10:15:30.73498876 +0000 UTC m=+294.787734729 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "82595633-1fc3-4dc7-a5bc-ce391c4d743d") : secret "prometheus-k8s-tls" not found Mar 18 10:15:22.735332 master-0 kubenswrapper[30420]: I0318 10:15:22.735280 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:22.735647 master-0 kubenswrapper[30420]: E0318 10:15:22.735582 30420 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 18 10:15:22.735792 master-0 kubenswrapper[30420]: E0318 10:15:22.735729 30420 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-thanos-sidecar-tls podName:82595633-1fc3-4dc7-a5bc-ce391c4d743d nodeName:}" failed. No retries permitted until 2026-03-18 10:15:30.735691518 +0000 UTC m=+294.788437477 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "82595633-1fc3-4dc7-a5bc-ce391c4d743d") : secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 18 10:15:25.176972 master-0 kubenswrapper[30420]: I0318 10:15:25.176877 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:15:25.178118 master-0 kubenswrapper[30420]: E0318 10:15:25.178065 30420 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Mar 18 10:15:25.178451 master-0 kubenswrapper[30420]: E0318 10:15:25.178423 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-main-tls podName:9adfdd99-ef2a-4698-8ef5-c2f97c4b6761 nodeName:}" failed. No retries permitted until 2026-03-18 10:15:41.178386883 +0000 UTC m=+305.231132872 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "9adfdd99-ef2a-4698-8ef5-c2f97c4b6761") : secret "alertmanager-main-tls" not found Mar 18 10:15:30.768674 master-0 kubenswrapper[30420]: I0318 10:15:30.768604 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:30.769559 master-0 kubenswrapper[30420]: I0318 10:15:30.769529 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:15:30.769751 master-0 kubenswrapper[30420]: E0318 10:15:30.768953 30420 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found Mar 18 10:15:30.769951 master-0 kubenswrapper[30420]: E0318 10:15:30.769749 30420 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 18 10:15:30.770030 master-0 kubenswrapper[30420]: E0318 10:15:30.769915 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-tls podName:82595633-1fc3-4dc7-a5bc-ce391c4d743d nodeName:}" failed. No retries permitted until 2026-03-18 10:15:46.769892958 +0000 UTC m=+310.822638907 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "82595633-1fc3-4dc7-a5bc-ce391c4d743d") : secret "prometheus-k8s-tls" not found Mar 18 10:15:30.770096 master-0 kubenswrapper[30420]: E0318 10:15:30.770034 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-thanos-sidecar-tls podName:82595633-1fc3-4dc7-a5bc-ce391c4d743d nodeName:}" failed. No retries permitted until 2026-03-18 10:15:46.770004611 +0000 UTC m=+310.822750570 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "82595633-1fc3-4dc7-a5bc-ce391c4d743d") : secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 18 10:15:33.446959 master-0 kubenswrapper[30420]: I0318 10:15:33.446880 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-7d8bb64c78-vvvft" Mar 18 10:15:33.447884 master-0 kubenswrapper[30420]: I0318 10:15:33.446979 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-7d8bb64c78-vvvft" Mar 18 10:15:33.458626 master-0 kubenswrapper[30420]: I0318 10:15:33.458545 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-7d8bb64c78-vvvft" Mar 18 10:15:34.436425 master-0 kubenswrapper[30420]: I0318 10:15:34.436370 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-7d8bb64c78-vvvft" Mar 18 10:15:36.178877 master-0 kubenswrapper[30420]: I0318 10:15:36.178812 30420 kubelet.go:1505] "Image garbage collection succeeded" Mar 
18 10:15:41.271304 master-0 kubenswrapper[30420]: I0318 10:15:41.271145 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 18 10:15:41.273323 master-0 kubenswrapper[30420]: E0318 10:15:41.273248 30420 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found
Mar 18 10:15:41.273669 master-0 kubenswrapper[30420]: E0318 10:15:41.273630 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-main-tls podName:9adfdd99-ef2a-4698-8ef5-c2f97c4b6761 nodeName:}" failed. No retries permitted until 2026-03-18 10:16:13.273587164 +0000 UTC m=+337.326333143 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "9adfdd99-ef2a-4698-8ef5-c2f97c4b6761") : secret "alertmanager-main-tls" not found
Mar 18 10:15:42.268304 master-0 kubenswrapper[30420]: I0318 10:15:42.267749 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-76b6568d85-cmj6p"]
Mar 18 10:15:42.269282 master-0 kubenswrapper[30420]: I0318 10:15:42.269153 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-76b6568d85-cmj6p"
Mar 18 10:15:42.272212 master-0 kubenswrapper[30420]: I0318 10:15:42.272149 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Mar 18 10:15:42.272633 master-0 kubenswrapper[30420]: I0318 10:15:42.272238 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Mar 18 10:15:42.272633 master-0 kubenswrapper[30420]: I0318 10:15:42.272291 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Mar 18 10:15:42.272732 master-0 kubenswrapper[30420]: I0318 10:15:42.272698 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Mar 18 10:15:42.280852 master-0 kubenswrapper[30420]: I0318 10:15:42.279924 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Mar 18 10:15:42.288293 master-0 kubenswrapper[30420]: I0318 10:15:42.286718 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-76b6568d85-cmj6p"]
Mar 18 10:15:42.291854 master-0 kubenswrapper[30420]: I0318 10:15:42.291795 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6xxn\" (UniqueName: \"kubernetes.io/projected/25a8ccb6-ea69-45bf-b460-1b887c5b3f22-kube-api-access-z6xxn\") pod \"console-operator-76b6568d85-cmj6p\" (UID: \"25a8ccb6-ea69-45bf-b460-1b887c5b3f22\") " pod="openshift-console-operator/console-operator-76b6568d85-cmj6p"
Mar 18 10:15:42.291927 master-0 kubenswrapper[30420]: I0318 10:15:42.291872 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25a8ccb6-ea69-45bf-b460-1b887c5b3f22-config\") pod \"console-operator-76b6568d85-cmj6p\" (UID: \"25a8ccb6-ea69-45bf-b460-1b887c5b3f22\") " pod="openshift-console-operator/console-operator-76b6568d85-cmj6p"
Mar 18 10:15:42.291980 master-0 kubenswrapper[30420]: I0318 10:15:42.291952 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25a8ccb6-ea69-45bf-b460-1b887c5b3f22-serving-cert\") pod \"console-operator-76b6568d85-cmj6p\" (UID: \"25a8ccb6-ea69-45bf-b460-1b887c5b3f22\") " pod="openshift-console-operator/console-operator-76b6568d85-cmj6p"
Mar 18 10:15:42.292036 master-0 kubenswrapper[30420]: I0318 10:15:42.292019 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25a8ccb6-ea69-45bf-b460-1b887c5b3f22-trusted-ca\") pod \"console-operator-76b6568d85-cmj6p\" (UID: \"25a8ccb6-ea69-45bf-b460-1b887c5b3f22\") " pod="openshift-console-operator/console-operator-76b6568d85-cmj6p"
Mar 18 10:15:42.393470 master-0 kubenswrapper[30420]: I0318 10:15:42.393408 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25a8ccb6-ea69-45bf-b460-1b887c5b3f22-config\") pod \"console-operator-76b6568d85-cmj6p\" (UID: \"25a8ccb6-ea69-45bf-b460-1b887c5b3f22\") " pod="openshift-console-operator/console-operator-76b6568d85-cmj6p"
Mar 18 10:15:42.393677 master-0 kubenswrapper[30420]: I0318 10:15:42.393547 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25a8ccb6-ea69-45bf-b460-1b887c5b3f22-serving-cert\") pod \"console-operator-76b6568d85-cmj6p\" (UID: \"25a8ccb6-ea69-45bf-b460-1b887c5b3f22\") " pod="openshift-console-operator/console-operator-76b6568d85-cmj6p"
Mar 18 10:15:42.393677 master-0 kubenswrapper[30420]: I0318 10:15:42.393642 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25a8ccb6-ea69-45bf-b460-1b887c5b3f22-trusted-ca\") pod \"console-operator-76b6568d85-cmj6p\" (UID: \"25a8ccb6-ea69-45bf-b460-1b887c5b3f22\") " pod="openshift-console-operator/console-operator-76b6568d85-cmj6p"
Mar 18 10:15:42.393771 master-0 kubenswrapper[30420]: I0318 10:15:42.393748 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6xxn\" (UniqueName: \"kubernetes.io/projected/25a8ccb6-ea69-45bf-b460-1b887c5b3f22-kube-api-access-z6xxn\") pod \"console-operator-76b6568d85-cmj6p\" (UID: \"25a8ccb6-ea69-45bf-b460-1b887c5b3f22\") " pod="openshift-console-operator/console-operator-76b6568d85-cmj6p"
Mar 18 10:15:42.395394 master-0 kubenswrapper[30420]: I0318 10:15:42.395343 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25a8ccb6-ea69-45bf-b460-1b887c5b3f22-config\") pod \"console-operator-76b6568d85-cmj6p\" (UID: \"25a8ccb6-ea69-45bf-b460-1b887c5b3f22\") " pod="openshift-console-operator/console-operator-76b6568d85-cmj6p"
Mar 18 10:15:42.395532 master-0 kubenswrapper[30420]: I0318 10:15:42.395478 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25a8ccb6-ea69-45bf-b460-1b887c5b3f22-trusted-ca\") pod \"console-operator-76b6568d85-cmj6p\" (UID: \"25a8ccb6-ea69-45bf-b460-1b887c5b3f22\") " pod="openshift-console-operator/console-operator-76b6568d85-cmj6p"
Mar 18 10:15:42.399219 master-0 kubenswrapper[30420]: I0318 10:15:42.399192 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25a8ccb6-ea69-45bf-b460-1b887c5b3f22-serving-cert\") pod \"console-operator-76b6568d85-cmj6p\" (UID: \"25a8ccb6-ea69-45bf-b460-1b887c5b3f22\") " pod="openshift-console-operator/console-operator-76b6568d85-cmj6p"
Mar 18 10:15:42.413712 master-0 kubenswrapper[30420]: I0318 10:15:42.413670 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6xxn\" (UniqueName: \"kubernetes.io/projected/25a8ccb6-ea69-45bf-b460-1b887c5b3f22-kube-api-access-z6xxn\") pod \"console-operator-76b6568d85-cmj6p\" (UID: \"25a8ccb6-ea69-45bf-b460-1b887c5b3f22\") " pod="openshift-console-operator/console-operator-76b6568d85-cmj6p"
Mar 18 10:15:42.596630 master-0 kubenswrapper[30420]: I0318 10:15:42.596575 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-76b6568d85-cmj6p"
Mar 18 10:15:43.049213 master-0 kubenswrapper[30420]: I0318 10:15:43.049061 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-76b6568d85-cmj6p"]
Mar 18 10:15:43.058661 master-0 kubenswrapper[30420]: W0318 10:15:43.058574 30420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25a8ccb6_ea69_45bf_b460_1b887c5b3f22.slice/crio-d8e0ccdb11dada1e04b6640cc4602d5c3207e58629ae41378a7d53c346b9f9bc WatchSource:0}: Error finding container d8e0ccdb11dada1e04b6640cc4602d5c3207e58629ae41378a7d53c346b9f9bc: Status 404 returned error can't find the container with id d8e0ccdb11dada1e04b6640cc4602d5c3207e58629ae41378a7d53c346b9f9bc
Mar 18 10:15:43.502661 master-0 kubenswrapper[30420]: I0318 10:15:43.502497 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-76b6568d85-cmj6p" event={"ID":"25a8ccb6-ea69-45bf-b460-1b887c5b3f22","Type":"ContainerStarted","Data":"d8e0ccdb11dada1e04b6640cc4602d5c3207e58629ae41378a7d53c346b9f9bc"}
Mar 18 10:15:46.534402 master-0 kubenswrapper[30420]: I0318 10:15:46.534328 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-76b6568d85-cmj6p_25a8ccb6-ea69-45bf-b460-1b887c5b3f22/console-operator/0.log"
Mar 18 10:15:46.535331 master-0 kubenswrapper[30420]: I0318 10:15:46.534437 30420 generic.go:334] "Generic (PLEG): container finished" podID="25a8ccb6-ea69-45bf-b460-1b887c5b3f22" containerID="c5b4c8562916f4c7e0d277a1b223b1979aaac698e4d8eb1b91e1c7ec4f51f9ae" exitCode=255
Mar 18 10:15:46.535331 master-0 kubenswrapper[30420]: I0318 10:15:46.534587 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-76b6568d85-cmj6p" event={"ID":"25a8ccb6-ea69-45bf-b460-1b887c5b3f22","Type":"ContainerDied","Data":"c5b4c8562916f4c7e0d277a1b223b1979aaac698e4d8eb1b91e1c7ec4f51f9ae"}
Mar 18 10:15:46.535331 master-0 kubenswrapper[30420]: I0318 10:15:46.535129 30420 scope.go:117] "RemoveContainer" containerID="c5b4c8562916f4c7e0d277a1b223b1979aaac698e4d8eb1b91e1c7ec4f51f9ae"
Mar 18 10:15:46.862249 master-0 kubenswrapper[30420]: I0318 10:15:46.862158 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 10:15:46.862457 master-0 kubenswrapper[30420]: I0318 10:15:46.862409 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 10:15:46.862541 master-0 kubenswrapper[30420]: E0318 10:15:46.862406 30420 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found
Mar 18 10:15:46.862606 master-0 kubenswrapper[30420]: E0318 10:15:46.862478 30420 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found
Mar 18 10:15:46.862606 master-0 kubenswrapper[30420]: E0318 10:15:46.862595 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-tls podName:82595633-1fc3-4dc7-a5bc-ce391c4d743d nodeName:}" failed. No retries permitted until 2026-03-18 10:16:18.862570855 +0000 UTC m=+342.915316784 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "82595633-1fc3-4dc7-a5bc-ce391c4d743d") : secret "prometheus-k8s-tls" not found
Mar 18 10:15:46.862709 master-0 kubenswrapper[30420]: E0318 10:15:46.862674 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-thanos-sidecar-tls podName:82595633-1fc3-4dc7-a5bc-ce391c4d743d nodeName:}" failed. No retries permitted until 2026-03-18 10:16:18.862643737 +0000 UTC m=+342.915389846 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "82595633-1fc3-4dc7-a5bc-ce391c4d743d") : secret "prometheus-k8s-thanos-sidecar-tls" not found
Mar 18 10:15:47.544673 master-0 kubenswrapper[30420]: I0318 10:15:47.544604 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-76b6568d85-cmj6p_25a8ccb6-ea69-45bf-b460-1b887c5b3f22/console-operator/1.log"
Mar 18 10:15:47.546454 master-0 kubenswrapper[30420]: I0318 10:15:47.546390 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-76b6568d85-cmj6p_25a8ccb6-ea69-45bf-b460-1b887c5b3f22/console-operator/0.log"
Mar 18 10:15:47.546610 master-0 kubenswrapper[30420]: I0318 10:15:47.546457 30420 generic.go:334] "Generic (PLEG): container finished" podID="25a8ccb6-ea69-45bf-b460-1b887c5b3f22" containerID="acb9c0765d27c6e3f56fa706e2ef921ab8a5dfaf21f741f18bd0b4285751727c" exitCode=255
Mar 18 10:15:47.546610 master-0 kubenswrapper[30420]: I0318 10:15:47.546495 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-76b6568d85-cmj6p" event={"ID":"25a8ccb6-ea69-45bf-b460-1b887c5b3f22","Type":"ContainerDied","Data":"acb9c0765d27c6e3f56fa706e2ef921ab8a5dfaf21f741f18bd0b4285751727c"}
Mar 18 10:15:47.546610 master-0 kubenswrapper[30420]: I0318 10:15:47.546545 30420 scope.go:117] "RemoveContainer" containerID="c5b4c8562916f4c7e0d277a1b223b1979aaac698e4d8eb1b91e1c7ec4f51f9ae"
Mar 18 10:15:47.547368 master-0 kubenswrapper[30420]: I0318 10:15:47.547312 30420 scope.go:117] "RemoveContainer" containerID="acb9c0765d27c6e3f56fa706e2ef921ab8a5dfaf21f741f18bd0b4285751727c"
Mar 18 10:15:47.547763 master-0 kubenswrapper[30420]: E0318 10:15:47.547705 30420 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator pod=console-operator-76b6568d85-cmj6p_openshift-console-operator(25a8ccb6-ea69-45bf-b460-1b887c5b3f22)\"" pod="openshift-console-operator/console-operator-76b6568d85-cmj6p" podUID="25a8ccb6-ea69-45bf-b460-1b887c5b3f22"
Mar 18 10:15:48.566373 master-0 kubenswrapper[30420]: I0318 10:15:48.566322 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-76b6568d85-cmj6p_25a8ccb6-ea69-45bf-b460-1b887c5b3f22/console-operator/1.log"
Mar 18 10:15:48.567128 master-0 kubenswrapper[30420]: I0318 10:15:48.566869 30420 scope.go:117] "RemoveContainer" containerID="acb9c0765d27c6e3f56fa706e2ef921ab8a5dfaf21f741f18bd0b4285751727c"
Mar 18 10:15:48.567128 master-0 kubenswrapper[30420]: E0318 10:15:48.567087 30420 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator pod=console-operator-76b6568d85-cmj6p_openshift-console-operator(25a8ccb6-ea69-45bf-b460-1b887c5b3f22)\"" pod="openshift-console-operator/console-operator-76b6568d85-cmj6p" podUID="25a8ccb6-ea69-45bf-b460-1b887c5b3f22"
Mar 18 10:15:52.597380 master-0 kubenswrapper[30420]: I0318 10:15:52.597289 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-76b6568d85-cmj6p"
Mar 18 10:15:52.597380 master-0 kubenswrapper[30420]: I0318 10:15:52.597355 30420 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-76b6568d85-cmj6p"
Mar 18 10:15:52.598959 master-0 kubenswrapper[30420]: I0318 10:15:52.597964 30420 scope.go:117] "RemoveContainer" containerID="acb9c0765d27c6e3f56fa706e2ef921ab8a5dfaf21f741f18bd0b4285751727c"
Mar 18 10:15:52.598959 master-0 kubenswrapper[30420]: E0318 10:15:52.598239 30420 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator pod=console-operator-76b6568d85-cmj6p_openshift-console-operator(25a8ccb6-ea69-45bf-b460-1b887c5b3f22)\"" pod="openshift-console-operator/console-operator-76b6568d85-cmj6p" podUID="25a8ccb6-ea69-45bf-b460-1b887c5b3f22"
Mar 18 10:16:06.175114 master-0 kubenswrapper[30420]: I0318 10:16:06.175048 30420 scope.go:117] "RemoveContainer" containerID="acb9c0765d27c6e3f56fa706e2ef921ab8a5dfaf21f741f18bd0b4285751727c"
Mar 18 10:16:06.733728 master-0 kubenswrapper[30420]: I0318 10:16:06.733609 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-76b6568d85-cmj6p_25a8ccb6-ea69-45bf-b460-1b887c5b3f22/console-operator/2.log"
Mar 18 10:16:06.735027 master-0 kubenswrapper[30420]: I0318 10:16:06.734998 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-76b6568d85-cmj6p_25a8ccb6-ea69-45bf-b460-1b887c5b3f22/console-operator/1.log"
Mar 18 10:16:06.735082 master-0 kubenswrapper[30420]: I0318 10:16:06.735040 30420 generic.go:334] "Generic (PLEG): container finished" podID="25a8ccb6-ea69-45bf-b460-1b887c5b3f22" containerID="e743793132a61daaf8dcb3981a83ae645d5f25c89610d999d1a3b76876c05db8" exitCode=255
Mar 18 10:16:06.735116 master-0 kubenswrapper[30420]: I0318 10:16:06.735073 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-76b6568d85-cmj6p" event={"ID":"25a8ccb6-ea69-45bf-b460-1b887c5b3f22","Type":"ContainerDied","Data":"e743793132a61daaf8dcb3981a83ae645d5f25c89610d999d1a3b76876c05db8"}
Mar 18 10:16:06.735116 master-0 kubenswrapper[30420]: I0318 10:16:06.735113 30420 scope.go:117] "RemoveContainer" containerID="acb9c0765d27c6e3f56fa706e2ef921ab8a5dfaf21f741f18bd0b4285751727c"
Mar 18 10:16:06.735697 master-0 kubenswrapper[30420]: I0318 10:16:06.735667 30420 scope.go:117] "RemoveContainer" containerID="e743793132a61daaf8dcb3981a83ae645d5f25c89610d999d1a3b76876c05db8"
Mar 18 10:16:06.735952 master-0 kubenswrapper[30420]: E0318 10:16:06.735918 30420 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=console-operator pod=console-operator-76b6568d85-cmj6p_openshift-console-operator(25a8ccb6-ea69-45bf-b460-1b887c5b3f22)\"" pod="openshift-console-operator/console-operator-76b6568d85-cmj6p" podUID="25a8ccb6-ea69-45bf-b460-1b887c5b3f22"
Mar 18 10:16:07.745379 master-0 kubenswrapper[30420]: I0318 10:16:07.745333 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-76b6568d85-cmj6p_25a8ccb6-ea69-45bf-b460-1b887c5b3f22/console-operator/2.log"
Mar 18 10:16:10.076853 master-0 kubenswrapper[30420]: I0318 10:16:10.076735 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-64659f7487-wmtsx"]
Mar 18 10:16:10.078406 master-0 kubenswrapper[30420]: I0318 10:16:10.078350 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-64659f7487-wmtsx"
Mar 18 10:16:10.080728 master-0 kubenswrapper[30420]: I0318 10:16:10.080588 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Mar 18 10:16:10.089012 master-0 kubenswrapper[30420]: I0318 10:16:10.088944 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-xbzmc"
Mar 18 10:16:10.110687 master-0 kubenswrapper[30420]: I0318 10:16:10.110624 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-64659f7487-wmtsx"]
Mar 18 10:16:10.153404 master-0 kubenswrapper[30420]: I0318 10:16:10.153303 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/2d4d730b-875c-4f6f-92b7-3c0e1035fdd6-monitoring-plugin-cert\") pod \"monitoring-plugin-64659f7487-wmtsx\" (UID: \"2d4d730b-875c-4f6f-92b7-3c0e1035fdd6\") " pod="openshift-monitoring/monitoring-plugin-64659f7487-wmtsx"
Mar 18 10:16:10.256428 master-0 kubenswrapper[30420]: I0318 10:16:10.256222 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/2d4d730b-875c-4f6f-92b7-3c0e1035fdd6-monitoring-plugin-cert\") pod \"monitoring-plugin-64659f7487-wmtsx\" (UID: \"2d4d730b-875c-4f6f-92b7-3c0e1035fdd6\") " pod="openshift-monitoring/monitoring-plugin-64659f7487-wmtsx"
Mar 18 10:16:10.262463 master-0 kubenswrapper[30420]: I0318 10:16:10.262391 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/2d4d730b-875c-4f6f-92b7-3c0e1035fdd6-monitoring-plugin-cert\") pod \"monitoring-plugin-64659f7487-wmtsx\" (UID: \"2d4d730b-875c-4f6f-92b7-3c0e1035fdd6\") " pod="openshift-monitoring/monitoring-plugin-64659f7487-wmtsx"
Mar 18 10:16:10.423986 master-0 kubenswrapper[30420]: I0318 10:16:10.423811 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-64659f7487-wmtsx"
Mar 18 10:16:10.865504 master-0 kubenswrapper[30420]: I0318 10:16:10.865434 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-64659f7487-wmtsx"]
Mar 18 10:16:10.866670 master-0 kubenswrapper[30420]: W0318 10:16:10.866623 30420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d4d730b_875c_4f6f_92b7_3c0e1035fdd6.slice/crio-bb9fae1b9f3bc976dd44b392fc5eae3e1e3c2c50e263eb1f1645dc124edd7aba WatchSource:0}: Error finding container bb9fae1b9f3bc976dd44b392fc5eae3e1e3c2c50e263eb1f1645dc124edd7aba: Status 404 returned error can't find the container with id bb9fae1b9f3bc976dd44b392fc5eae3e1e3c2c50e263eb1f1645dc124edd7aba
Mar 18 10:16:11.771525 master-0 kubenswrapper[30420]: I0318 10:16:11.771457 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-64659f7487-wmtsx" event={"ID":"2d4d730b-875c-4f6f-92b7-3c0e1035fdd6","Type":"ContainerStarted","Data":"bb9fae1b9f3bc976dd44b392fc5eae3e1e3c2c50e263eb1f1645dc124edd7aba"}
Mar 18 10:16:12.597131 master-0 kubenswrapper[30420]: I0318 10:16:12.597042 30420 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-76b6568d85-cmj6p"
Mar 18 10:16:12.597306 master-0 kubenswrapper[30420]: I0318 10:16:12.597173 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-76b6568d85-cmj6p"
Mar 18 10:16:12.598190 master-0 kubenswrapper[30420]: I0318 10:16:12.598144 30420 scope.go:117] "RemoveContainer" containerID="e743793132a61daaf8dcb3981a83ae645d5f25c89610d999d1a3b76876c05db8"
Mar 18 10:16:12.598713 master-0 kubenswrapper[30420]: E0318 10:16:12.598629 30420 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=console-operator pod=console-operator-76b6568d85-cmj6p_openshift-console-operator(25a8ccb6-ea69-45bf-b460-1b887c5b3f22)\"" pod="openshift-console-operator/console-operator-76b6568d85-cmj6p" podUID="25a8ccb6-ea69-45bf-b460-1b887c5b3f22"
Mar 18 10:16:12.779639 master-0 kubenswrapper[30420]: I0318 10:16:12.779487 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-64659f7487-wmtsx" event={"ID":"2d4d730b-875c-4f6f-92b7-3c0e1035fdd6","Type":"ContainerStarted","Data":"847c937260e2dfe4109fb301fce9eb8d8d28293b5909d97871c57a1aa761803f"}
Mar 18 10:16:12.780344 master-0 kubenswrapper[30420]: I0318 10:16:12.780270 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-64659f7487-wmtsx"
Mar 18 10:16:12.788501 master-0 kubenswrapper[30420]: I0318 10:16:12.788438 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-64659f7487-wmtsx"
Mar 18 10:16:12.803716 master-0 kubenswrapper[30420]: I0318 10:16:12.803577 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-64659f7487-wmtsx" podStartSLOduration=1.158497841 podStartE2EDuration="2.803556493s" podCreationTimestamp="2026-03-18 10:16:10 +0000 UTC" firstStartedPulling="2026-03-18 10:16:10.868685681 +0000 UTC m=+334.921431610" lastFinishedPulling="2026-03-18 10:16:12.513744323 +0000 UTC m=+336.566490262" observedRunningTime="2026-03-18 10:16:12.798984668 +0000 UTC m=+336.851730637" watchObservedRunningTime="2026-03-18 10:16:12.803556493 +0000 UTC m=+336.856302432"
Mar 18 10:16:13.310349 master-0 kubenswrapper[30420]: I0318 10:16:13.310243 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 18 10:16:13.316998 master-0 kubenswrapper[30420]: I0318 10:16:13.315349 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 18 10:16:13.526075 master-0 kubenswrapper[30420]: I0318 10:16:13.525973 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Mar 18 10:16:14.037712 master-0 kubenswrapper[30420]: I0318 10:16:14.037670 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Mar 18 10:16:14.798574 master-0 kubenswrapper[30420]: I0318 10:16:14.798514 30420 generic.go:334] "Generic (PLEG): container finished" podID="9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" containerID="c67af3281f67d00b051eebcba127eebcaf44b1264ab590764fd8664d9451e0a7" exitCode=0
Mar 18 10:16:14.798876 master-0 kubenswrapper[30420]: I0318 10:16:14.798568 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761","Type":"ContainerDied","Data":"c67af3281f67d00b051eebcba127eebcaf44b1264ab590764fd8664d9451e0a7"}
Mar 18 10:16:14.798876 master-0 kubenswrapper[30420]: I0318 10:16:14.798645 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761","Type":"ContainerStarted","Data":"c0cbc9d3c69c7e7e7f33f0b0ddf267e0bec1122a1c08a1c35a8479db1d68b27c"}
Mar 18 10:16:16.817437 master-0 kubenswrapper[30420]: I0318 10:16:16.817359 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761","Type":"ContainerStarted","Data":"b3e3abcb3eed9e3a76ccda69fb88c81863a9b6023fc0895bac0a49ac23f0964d"}
Mar 18 10:16:16.817919 master-0 kubenswrapper[30420]: I0318 10:16:16.817449 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761","Type":"ContainerStarted","Data":"6f0a607d3d4ed38bb00f164af6e11bcd0b44d7197e12694411a958ee8be276f5"}
Mar 18 10:16:17.832562 master-0 kubenswrapper[30420]: I0318 10:16:17.832470 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761","Type":"ContainerStarted","Data":"e9eba945cae2ffe1611d676653f381605ef3bc3f8ae1008a52eab79b2e860df4"}
Mar 18 10:16:17.832562 master-0 kubenswrapper[30420]: I0318 10:16:17.832527 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761","Type":"ContainerStarted","Data":"3e10f8a078a0a63498335680d1ef4600429c447d0025a7efed9dc9c399363a43"}
Mar 18 10:16:17.832562 master-0 kubenswrapper[30420]: I0318 10:16:17.832540 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761","Type":"ContainerStarted","Data":"e4236a5bb78301349ee653952bd3cb395f1b39a85f8de46e23e28e77a666e3c7"}
Mar 18 10:16:17.832562 master-0 kubenswrapper[30420]: I0318 10:16:17.832552 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761","Type":"ContainerStarted","Data":"b4681dda832d085e385da86341cb24481abad420db2010ec43eb55f255a7bff3"}
Mar 18 10:16:17.880697 master-0 kubenswrapper[30420]: I0318 10:16:17.880597 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=67.261884784 podStartE2EDuration="1m8.880555757s" podCreationTimestamp="2026-03-18 10:15:09 +0000 UTC" firstStartedPulling="2026-03-18 10:16:14.801599804 +0000 UTC m=+338.854345733" lastFinishedPulling="2026-03-18 10:16:16.420270777 +0000 UTC m=+340.473016706" observedRunningTime="2026-03-18 10:16:17.868450294 +0000 UTC m=+341.921196223" watchObservedRunningTime="2026-03-18 10:16:17.880555757 +0000 UTC m=+341.933301686"
Mar 18 10:16:18.907855 master-0 kubenswrapper[30420]: I0318 10:16:18.907745 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 10:16:18.909460 master-0 kubenswrapper[30420]: I0318 10:16:18.909222 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 10:16:18.913118 master-0 kubenswrapper[30420]: I0318 10:16:18.913064 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 10:16:18.914302 master-0 kubenswrapper[30420]: I0318 10:16:18.914254 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 10:16:19.119198 master-0 kubenswrapper[30420]: I0318 10:16:19.119106 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 10:16:19.614345 master-0 kubenswrapper[30420]: I0318 10:16:19.614273 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Mar 18 10:16:19.623576 master-0 kubenswrapper[30420]: W0318 10:16:19.623514 30420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82595633_1fc3_4dc7_a5bc_ce391c4d743d.slice/crio-a769b1ef229aa220d3080039e58ca071fec3584c0e7796349294fecf9958f89d WatchSource:0}: Error finding container a769b1ef229aa220d3080039e58ca071fec3584c0e7796349294fecf9958f89d: Status 404 returned error can't find the container with id a769b1ef229aa220d3080039e58ca071fec3584c0e7796349294fecf9958f89d
Mar 18 10:16:19.847188 master-0 kubenswrapper[30420]: I0318 10:16:19.847123 30420 generic.go:334] "Generic (PLEG): container finished" podID="82595633-1fc3-4dc7-a5bc-ce391c4d743d" containerID="b10262cb013c2d9967201332c3435cebf34f30ce3594cb1f99a512f176d9e38c" exitCode=0
Mar 18 10:16:19.847188 master-0 kubenswrapper[30420]: I0318 10:16:19.847166 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"82595633-1fc3-4dc7-a5bc-ce391c4d743d","Type":"ContainerDied","Data":"b10262cb013c2d9967201332c3435cebf34f30ce3594cb1f99a512f176d9e38c"}
Mar 18 10:16:19.847468 master-0 kubenswrapper[30420]: I0318 10:16:19.847215 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"82595633-1fc3-4dc7-a5bc-ce391c4d743d","Type":"ContainerStarted","Data":"a769b1ef229aa220d3080039e58ca071fec3584c0e7796349294fecf9958f89d"}
Mar 18 10:16:23.895340 master-0 kubenswrapper[30420]: I0318 10:16:23.895292 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"82595633-1fc3-4dc7-a5bc-ce391c4d743d","Type":"ContainerStarted","Data":"a7d7f851f4c1584aa500215adc79e22cec4d88779ff6943dd801eb2dcf6d6097"}
Mar 18 10:16:24.911734 master-0 kubenswrapper[30420]: I0318 10:16:24.911662 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"82595633-1fc3-4dc7-a5bc-ce391c4d743d","Type":"ContainerStarted","Data":"e16addfd28e3f5280697035643ff6b4e9e9620e0c0365e8d1b364e4a59da7ee7"}
Mar 18 10:16:24.912670 master-0 kubenswrapper[30420]: I0318 10:16:24.911758 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"82595633-1fc3-4dc7-a5bc-ce391c4d743d","Type":"ContainerStarted","Data":"677c8e0fe1cb41f8869ff6affa1a09ada04455dd6fb0bafbd39b72e228a5bed9"}
Mar 18 10:16:24.912670 master-0 kubenswrapper[30420]: I0318 10:16:24.911781 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"82595633-1fc3-4dc7-a5bc-ce391c4d743d","Type":"ContainerStarted","Data":"99ea07cc70b5b202dcc0d5bb6ffbd3c680b98550ccd0e11007931357c0554eb1"}
Mar 18 10:16:24.912670 master-0 kubenswrapper[30420]: I0318 10:16:24.911800 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"82595633-1fc3-4dc7-a5bc-ce391c4d743d","Type":"ContainerStarted","Data":"7a02ae2649c338a61edde895fd21b3f44e6f25ebc4803ff3f064ad18b3962b9c"}
Mar 18 10:16:24.912670 master-0 kubenswrapper[30420]: I0318 10:16:24.911819 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"82595633-1fc3-4dc7-a5bc-ce391c4d743d","Type":"ContainerStarted","Data":"dcf2b1ec05bab2e946c1cab6fd5813ae02216ee988779d859d784f1aefec0d8d"}
Mar 18 10:16:24.953931 master-0 kubenswrapper[30420]: I0318 10:16:24.952606 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=67.228096278 podStartE2EDuration="1m10.952577568s" podCreationTimestamp="2026-03-18 10:15:14 +0000 UTC" firstStartedPulling="2026-03-18 10:16:19.848497206 +0000 UTC m=+343.901243135" lastFinishedPulling="2026-03-18 10:16:23.572978456 +0000 UTC m=+347.625724425" observedRunningTime="2026-03-18 10:16:24.950712141 +0000 UTC m=+349.003458120" watchObservedRunningTime="2026-03-18 10:16:24.952577568 +0000 UTC m=+349.005323547"
Mar 18 10:16:27.167942 master-0 kubenswrapper[30420]: I0318 10:16:27.167870 30420 scope.go:117] "RemoveContainer" containerID="e743793132a61daaf8dcb3981a83ae645d5f25c89610d999d1a3b76876c05db8"
Mar 18 10:16:27.957656 master-0 kubenswrapper[30420]: I0318 10:16:27.957544 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-76b6568d85-cmj6p_25a8ccb6-ea69-45bf-b460-1b887c5b3f22/console-operator/2.log"
Mar 18 10:16:27.958020 master-0 kubenswrapper[30420]: I0318 10:16:27.957684 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-76b6568d85-cmj6p" event={"ID":"25a8ccb6-ea69-45bf-b460-1b887c5b3f22","Type":"ContainerStarted","Data":"a054620276e6744e7e1d8d7d84c48b29ee04e1dc2680846a44253253f842675f"}
Mar 18 10:16:27.958350 master-0 kubenswrapper[30420]: I0318 10:16:27.958271 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-76b6568d85-cmj6p"
Mar 18 10:16:27.988720 master-0 kubenswrapper[30420]: I0318 10:16:27.988604 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-76b6568d85-cmj6p" podStartSLOduration=43.653175493 podStartE2EDuration="45.988575275s" podCreationTimestamp="2026-03-18 10:15:42 +0000 UTC" firstStartedPulling="2026-03-18 10:15:43.063266373 +0000 UTC m=+307.116012342" lastFinishedPulling="2026-03-18 10:15:45.398666195 +0000 UTC m=+309.451412124" observedRunningTime="2026-03-18 10:16:27.985913808 +0000 UTC m=+352.038659767" watchObservedRunningTime="2026-03-18 10:16:27.988575275 +0000 UTC m=+352.041321234"
Mar 18 10:16:28.394968 master-0 kubenswrapper[30420]: I0318 10:16:28.394422 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-76b6568d85-cmj6p"
Mar 18 10:16:28.608158 master-0 kubenswrapper[30420]: I0318 10:16:28.608090 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-66b8ffb895-wg4k5"]
Mar 18 10:16:28.608881 master-0 kubenswrapper[30420]: I0318 10:16:28.608860 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-66b8ffb895-wg4k5" Mar 18 10:16:28.611042 master-0 kubenswrapper[30420]: I0318 10:16:28.611009 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 18 10:16:28.611166 master-0 kubenswrapper[30420]: I0318 10:16:28.611126 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 18 10:16:28.642922 master-0 kubenswrapper[30420]: I0318 10:16:28.642864 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-66b8ffb895-wg4k5"] Mar 18 10:16:28.677367 master-0 kubenswrapper[30420]: I0318 10:16:28.675698 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxm7b\" (UniqueName: \"kubernetes.io/projected/5c77e26d-a46a-4552-88b8-2c8e3473437e-kube-api-access-gxm7b\") pod \"downloads-66b8ffb895-wg4k5\" (UID: \"5c77e26d-a46a-4552-88b8-2c8e3473437e\") " pod="openshift-console/downloads-66b8ffb895-wg4k5" Mar 18 10:16:28.776694 master-0 kubenswrapper[30420]: I0318 10:16:28.776617 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxm7b\" (UniqueName: \"kubernetes.io/projected/5c77e26d-a46a-4552-88b8-2c8e3473437e-kube-api-access-gxm7b\") pod \"downloads-66b8ffb895-wg4k5\" (UID: \"5c77e26d-a46a-4552-88b8-2c8e3473437e\") " pod="openshift-console/downloads-66b8ffb895-wg4k5" Mar 18 10:16:28.796502 master-0 kubenswrapper[30420]: I0318 10:16:28.796443 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxm7b\" (UniqueName: \"kubernetes.io/projected/5c77e26d-a46a-4552-88b8-2c8e3473437e-kube-api-access-gxm7b\") pod \"downloads-66b8ffb895-wg4k5\" (UID: \"5c77e26d-a46a-4552-88b8-2c8e3473437e\") " pod="openshift-console/downloads-66b8ffb895-wg4k5" Mar 18 10:16:28.924912 master-0 kubenswrapper[30420]: I0318 10:16:28.924853 30420 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-66b8ffb895-wg4k5" Mar 18 10:16:29.120076 master-0 kubenswrapper[30420]: I0318 10:16:29.119996 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:16:29.359877 master-0 kubenswrapper[30420]: I0318 10:16:29.359761 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-66b8ffb895-wg4k5"] Mar 18 10:16:29.981337 master-0 kubenswrapper[30420]: I0318 10:16:29.981135 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-66b8ffb895-wg4k5" event={"ID":"5c77e26d-a46a-4552-88b8-2c8e3473437e","Type":"ContainerStarted","Data":"a8151674725f97f2d81220f7ac0bd7612bf05ccb8c2ad79e8a722cb6c6c6e4b5"} Mar 18 10:16:35.931077 master-0 kubenswrapper[30420]: I0318 10:16:35.930991 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-74bbfbc495-9qrz2"] Mar 18 10:16:35.933004 master-0 kubenswrapper[30420]: I0318 10:16:35.932897 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-74bbfbc495-9qrz2" Mar 18 10:16:35.938996 master-0 kubenswrapper[30420]: I0318 10:16:35.938928 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 18 10:16:35.938996 master-0 kubenswrapper[30420]: I0318 10:16:35.938982 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 18 10:16:35.939187 master-0 kubenswrapper[30420]: I0318 10:16:35.939026 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 18 10:16:35.939187 master-0 kubenswrapper[30420]: I0318 10:16:35.939030 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 18 10:16:35.939290 master-0 kubenswrapper[30420]: I0318 10:16:35.939173 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 18 10:16:35.951515 master-0 kubenswrapper[30420]: I0318 10:16:35.951384 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-74bbfbc495-9qrz2"] Mar 18 10:16:36.068336 master-0 kubenswrapper[30420]: I0318 10:16:36.068267 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/05094271-f491-4119-a9db-88b7fe4f7f3c-console-config\") pod \"console-74bbfbc495-9qrz2\" (UID: \"05094271-f491-4119-a9db-88b7fe4f7f3c\") " pod="openshift-console/console-74bbfbc495-9qrz2" Mar 18 10:16:36.068336 master-0 kubenswrapper[30420]: I0318 10:16:36.068331 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/05094271-f491-4119-a9db-88b7fe4f7f3c-service-ca\") pod \"console-74bbfbc495-9qrz2\" (UID: \"05094271-f491-4119-a9db-88b7fe4f7f3c\") " 
pod="openshift-console/console-74bbfbc495-9qrz2" Mar 18 10:16:36.068651 master-0 kubenswrapper[30420]: I0318 10:16:36.068351 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/05094271-f491-4119-a9db-88b7fe4f7f3c-console-serving-cert\") pod \"console-74bbfbc495-9qrz2\" (UID: \"05094271-f491-4119-a9db-88b7fe4f7f3c\") " pod="openshift-console/console-74bbfbc495-9qrz2" Mar 18 10:16:36.068651 master-0 kubenswrapper[30420]: I0318 10:16:36.068420 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/05094271-f491-4119-a9db-88b7fe4f7f3c-console-oauth-config\") pod \"console-74bbfbc495-9qrz2\" (UID: \"05094271-f491-4119-a9db-88b7fe4f7f3c\") " pod="openshift-console/console-74bbfbc495-9qrz2" Mar 18 10:16:36.068651 master-0 kubenswrapper[30420]: I0318 10:16:36.068474 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/05094271-f491-4119-a9db-88b7fe4f7f3c-oauth-serving-cert\") pod \"console-74bbfbc495-9qrz2\" (UID: \"05094271-f491-4119-a9db-88b7fe4f7f3c\") " pod="openshift-console/console-74bbfbc495-9qrz2" Mar 18 10:16:36.068651 master-0 kubenswrapper[30420]: I0318 10:16:36.068495 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkntc\" (UniqueName: \"kubernetes.io/projected/05094271-f491-4119-a9db-88b7fe4f7f3c-kube-api-access-gkntc\") pod \"console-74bbfbc495-9qrz2\" (UID: \"05094271-f491-4119-a9db-88b7fe4f7f3c\") " pod="openshift-console/console-74bbfbc495-9qrz2" Mar 18 10:16:36.169393 master-0 kubenswrapper[30420]: I0318 10:16:36.169330 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkntc\" (UniqueName: 
\"kubernetes.io/projected/05094271-f491-4119-a9db-88b7fe4f7f3c-kube-api-access-gkntc\") pod \"console-74bbfbc495-9qrz2\" (UID: \"05094271-f491-4119-a9db-88b7fe4f7f3c\") " pod="openshift-console/console-74bbfbc495-9qrz2" Mar 18 10:16:36.169602 master-0 kubenswrapper[30420]: I0318 10:16:36.169543 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/05094271-f491-4119-a9db-88b7fe4f7f3c-console-config\") pod \"console-74bbfbc495-9qrz2\" (UID: \"05094271-f491-4119-a9db-88b7fe4f7f3c\") " pod="openshift-console/console-74bbfbc495-9qrz2" Mar 18 10:16:36.169797 master-0 kubenswrapper[30420]: I0318 10:16:36.169740 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/05094271-f491-4119-a9db-88b7fe4f7f3c-service-ca\") pod \"console-74bbfbc495-9qrz2\" (UID: \"05094271-f491-4119-a9db-88b7fe4f7f3c\") " pod="openshift-console/console-74bbfbc495-9qrz2" Mar 18 10:16:36.169919 master-0 kubenswrapper[30420]: I0318 10:16:36.169899 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/05094271-f491-4119-a9db-88b7fe4f7f3c-console-serving-cert\") pod \"console-74bbfbc495-9qrz2\" (UID: \"05094271-f491-4119-a9db-88b7fe4f7f3c\") " pod="openshift-console/console-74bbfbc495-9qrz2" Mar 18 10:16:36.169997 master-0 kubenswrapper[30420]: I0318 10:16:36.169975 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/05094271-f491-4119-a9db-88b7fe4f7f3c-console-oauth-config\") pod \"console-74bbfbc495-9qrz2\" (UID: \"05094271-f491-4119-a9db-88b7fe4f7f3c\") " pod="openshift-console/console-74bbfbc495-9qrz2" Mar 18 10:16:36.170099 master-0 kubenswrapper[30420]: I0318 10:16:36.170072 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/05094271-f491-4119-a9db-88b7fe4f7f3c-oauth-serving-cert\") pod \"console-74bbfbc495-9qrz2\" (UID: \"05094271-f491-4119-a9db-88b7fe4f7f3c\") " pod="openshift-console/console-74bbfbc495-9qrz2" Mar 18 10:16:36.173173 master-0 kubenswrapper[30420]: I0318 10:16:36.173135 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 18 10:16:36.173267 master-0 kubenswrapper[30420]: I0318 10:16:36.173141 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 18 10:16:36.173267 master-0 kubenswrapper[30420]: I0318 10:16:36.173203 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 18 10:16:36.173459 master-0 kubenswrapper[30420]: I0318 10:16:36.173417 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 18 10:16:36.173521 master-0 kubenswrapper[30420]: I0318 10:16:36.173428 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 18 10:16:36.182937 master-0 kubenswrapper[30420]: I0318 10:16:36.182025 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/05094271-f491-4119-a9db-88b7fe4f7f3c-oauth-serving-cert\") pod \"console-74bbfbc495-9qrz2\" (UID: \"05094271-f491-4119-a9db-88b7fe4f7f3c\") " pod="openshift-console/console-74bbfbc495-9qrz2" Mar 18 10:16:36.182937 master-0 kubenswrapper[30420]: I0318 10:16:36.182169 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/05094271-f491-4119-a9db-88b7fe4f7f3c-service-ca\") pod \"console-74bbfbc495-9qrz2\" (UID: \"05094271-f491-4119-a9db-88b7fe4f7f3c\") " pod="openshift-console/console-74bbfbc495-9qrz2" Mar 18 10:16:36.182937 
master-0 kubenswrapper[30420]: I0318 10:16:36.182212 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/05094271-f491-4119-a9db-88b7fe4f7f3c-console-config\") pod \"console-74bbfbc495-9qrz2\" (UID: \"05094271-f491-4119-a9db-88b7fe4f7f3c\") " pod="openshift-console/console-74bbfbc495-9qrz2" Mar 18 10:16:36.184566 master-0 kubenswrapper[30420]: I0318 10:16:36.184518 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/05094271-f491-4119-a9db-88b7fe4f7f3c-console-serving-cert\") pod \"console-74bbfbc495-9qrz2\" (UID: \"05094271-f491-4119-a9db-88b7fe4f7f3c\") " pod="openshift-console/console-74bbfbc495-9qrz2" Mar 18 10:16:36.192699 master-0 kubenswrapper[30420]: I0318 10:16:36.192646 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkntc\" (UniqueName: \"kubernetes.io/projected/05094271-f491-4119-a9db-88b7fe4f7f3c-kube-api-access-gkntc\") pod \"console-74bbfbc495-9qrz2\" (UID: \"05094271-f491-4119-a9db-88b7fe4f7f3c\") " pod="openshift-console/console-74bbfbc495-9qrz2" Mar 18 10:16:36.194794 master-0 kubenswrapper[30420]: I0318 10:16:36.194740 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/05094271-f491-4119-a9db-88b7fe4f7f3c-console-oauth-config\") pod \"console-74bbfbc495-9qrz2\" (UID: \"05094271-f491-4119-a9db-88b7fe4f7f3c\") " pod="openshift-console/console-74bbfbc495-9qrz2" Mar 18 10:16:36.266681 master-0 kubenswrapper[30420]: I0318 10:16:36.266624 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-74bbfbc495-9qrz2" Mar 18 10:16:36.585911 master-0 kubenswrapper[30420]: I0318 10:16:36.585701 30420 scope.go:117] "RemoveContainer" containerID="7ca73c96270bb01e4b2a501f5fca8a82d6d3109e114172103ea987822829d77c" Mar 18 10:16:36.605530 master-0 kubenswrapper[30420]: I0318 10:16:36.605451 30420 scope.go:117] "RemoveContainer" containerID="66dba26b707d8a7ef9a56c2e052eb81cdb6a21e228ccc4ca178ec7f65804ffae" Mar 18 10:16:36.687119 master-0 kubenswrapper[30420]: I0318 10:16:36.687082 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-74bbfbc495-9qrz2"] Mar 18 10:16:37.060591 master-0 kubenswrapper[30420]: I0318 10:16:37.060435 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-74bbfbc495-9qrz2" event={"ID":"05094271-f491-4119-a9db-88b7fe4f7f3c","Type":"ContainerStarted","Data":"2a9332c92af92ef3b2ca251ca5e4141c5440002945b35af8ac6d12aad5abf66b"} Mar 18 10:16:41.797578 master-0 kubenswrapper[30420]: I0318 10:16:41.797494 30420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm"] Mar 18 10:16:41.820549 master-0 kubenswrapper[30420]: I0318 10:16:41.820498 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7ff9bc57fc-q5plp"] Mar 18 10:16:41.823357 master-0 kubenswrapper[30420]: I0318 10:16:41.821700 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7ff9bc57fc-q5plp" Mar 18 10:16:41.834181 master-0 kubenswrapper[30420]: I0318 10:16:41.833305 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 18 10:16:41.854708 master-0 kubenswrapper[30420]: I0318 10:16:41.854640 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7ff9bc57fc-q5plp"] Mar 18 10:16:41.969121 master-0 kubenswrapper[30420]: I0318 10:16:41.968987 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-service-ca\") pod \"console-7ff9bc57fc-q5plp\" (UID: \"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985\") " pod="openshift-console/console-7ff9bc57fc-q5plp" Mar 18 10:16:41.969121 master-0 kubenswrapper[30420]: I0318 10:16:41.969093 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwbnw\" (UniqueName: \"kubernetes.io/projected/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-kube-api-access-lwbnw\") pod \"console-7ff9bc57fc-q5plp\" (UID: \"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985\") " pod="openshift-console/console-7ff9bc57fc-q5plp" Mar 18 10:16:41.969372 master-0 kubenswrapper[30420]: I0318 10:16:41.969159 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-oauth-serving-cert\") pod \"console-7ff9bc57fc-q5plp\" (UID: \"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985\") " pod="openshift-console/console-7ff9bc57fc-q5plp" Mar 18 10:16:41.969372 master-0 kubenswrapper[30420]: I0318 10:16:41.969185 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-console-oauth-config\") pod \"console-7ff9bc57fc-q5plp\" (UID: \"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985\") " pod="openshift-console/console-7ff9bc57fc-q5plp" Mar 18 10:16:41.969372 master-0 kubenswrapper[30420]: I0318 10:16:41.969246 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-console-serving-cert\") pod \"console-7ff9bc57fc-q5plp\" (UID: \"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985\") " pod="openshift-console/console-7ff9bc57fc-q5plp" Mar 18 10:16:41.969933 master-0 kubenswrapper[30420]: I0318 10:16:41.969382 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-console-config\") pod \"console-7ff9bc57fc-q5plp\" (UID: \"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985\") " pod="openshift-console/console-7ff9bc57fc-q5plp" Mar 18 10:16:41.970373 master-0 kubenswrapper[30420]: I0318 10:16:41.970342 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-trusted-ca-bundle\") pod \"console-7ff9bc57fc-q5plp\" (UID: \"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985\") " pod="openshift-console/console-7ff9bc57fc-q5plp" Mar 18 10:16:42.074583 master-0 kubenswrapper[30420]: I0318 10:16:42.074497 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwbnw\" (UniqueName: \"kubernetes.io/projected/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-kube-api-access-lwbnw\") pod \"console-7ff9bc57fc-q5plp\" (UID: \"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985\") " pod="openshift-console/console-7ff9bc57fc-q5plp" Mar 18 10:16:42.074876 master-0 kubenswrapper[30420]: I0318 10:16:42.074778 30420 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-oauth-serving-cert\") pod \"console-7ff9bc57fc-q5plp\" (UID: \"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985\") " pod="openshift-console/console-7ff9bc57fc-q5plp" Mar 18 10:16:42.074876 master-0 kubenswrapper[30420]: I0318 10:16:42.074816 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-console-oauth-config\") pod \"console-7ff9bc57fc-q5plp\" (UID: \"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985\") " pod="openshift-console/console-7ff9bc57fc-q5plp" Mar 18 10:16:42.075029 master-0 kubenswrapper[30420]: I0318 10:16:42.074944 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-console-serving-cert\") pod \"console-7ff9bc57fc-q5plp\" (UID: \"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985\") " pod="openshift-console/console-7ff9bc57fc-q5plp" Mar 18 10:16:42.075029 master-0 kubenswrapper[30420]: I0318 10:16:42.075028 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-console-config\") pod \"console-7ff9bc57fc-q5plp\" (UID: \"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985\") " pod="openshift-console/console-7ff9bc57fc-q5plp" Mar 18 10:16:42.076261 master-0 kubenswrapper[30420]: I0318 10:16:42.075706 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-trusted-ca-bundle\") pod \"console-7ff9bc57fc-q5plp\" (UID: \"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985\") " pod="openshift-console/console-7ff9bc57fc-q5plp" Mar 18 10:16:42.076261 master-0 
kubenswrapper[30420]: I0318 10:16:42.075936 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-service-ca\") pod \"console-7ff9bc57fc-q5plp\" (UID: \"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985\") " pod="openshift-console/console-7ff9bc57fc-q5plp" Mar 18 10:16:42.076261 master-0 kubenswrapper[30420]: I0318 10:16:42.076227 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-console-config\") pod \"console-7ff9bc57fc-q5plp\" (UID: \"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985\") " pod="openshift-console/console-7ff9bc57fc-q5plp" Mar 18 10:16:42.077354 master-0 kubenswrapper[30420]: I0318 10:16:42.077323 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-trusted-ca-bundle\") pod \"console-7ff9bc57fc-q5plp\" (UID: \"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985\") " pod="openshift-console/console-7ff9bc57fc-q5plp" Mar 18 10:16:42.078169 master-0 kubenswrapper[30420]: I0318 10:16:42.078138 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-service-ca\") pod \"console-7ff9bc57fc-q5plp\" (UID: \"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985\") " pod="openshift-console/console-7ff9bc57fc-q5plp" Mar 18 10:16:42.078976 master-0 kubenswrapper[30420]: I0318 10:16:42.078909 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-console-oauth-config\") pod \"console-7ff9bc57fc-q5plp\" (UID: \"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985\") " pod="openshift-console/console-7ff9bc57fc-q5plp" Mar 18 10:16:42.079386 master-0 
kubenswrapper[30420]: I0318 10:16:42.079331 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-oauth-serving-cert\") pod \"console-7ff9bc57fc-q5plp\" (UID: \"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985\") " pod="openshift-console/console-7ff9bc57fc-q5plp" Mar 18 10:16:42.080687 master-0 kubenswrapper[30420]: I0318 10:16:42.080649 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-console-serving-cert\") pod \"console-7ff9bc57fc-q5plp\" (UID: \"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985\") " pod="openshift-console/console-7ff9bc57fc-q5plp" Mar 18 10:16:42.095646 master-0 kubenswrapper[30420]: I0318 10:16:42.095582 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwbnw\" (UniqueName: \"kubernetes.io/projected/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-kube-api-access-lwbnw\") pod \"console-7ff9bc57fc-q5plp\" (UID: \"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985\") " pod="openshift-console/console-7ff9bc57fc-q5plp" Mar 18 10:16:42.151964 master-0 kubenswrapper[30420]: I0318 10:16:42.151898 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7ff9bc57fc-q5plp" Mar 18 10:16:42.686296 master-0 kubenswrapper[30420]: I0318 10:16:42.686114 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7ff9bc57fc-q5plp"] Mar 18 10:16:43.108413 master-0 kubenswrapper[30420]: I0318 10:16:43.108339 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7ff9bc57fc-q5plp" event={"ID":"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985","Type":"ContainerStarted","Data":"854e7f8ecf3d24e75b2eab40318e503bdd48171ac04595739f97ec4df6a8a468"} Mar 18 10:16:43.108413 master-0 kubenswrapper[30420]: I0318 10:16:43.108412 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7ff9bc57fc-q5plp" event={"ID":"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985","Type":"ContainerStarted","Data":"3da872bb6c93c87a71567fbc7cb2b38afb3d6b3107b781693000b9bb6d2b8971"} Mar 18 10:16:43.113407 master-0 kubenswrapper[30420]: I0318 10:16:43.112980 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-74bbfbc495-9qrz2" event={"ID":"05094271-f491-4119-a9db-88b7fe4f7f3c","Type":"ContainerStarted","Data":"36d63db8f3c986cdfcd87575d271d8cb4ae85be80326c8340b5c3145f2f22ce5"} Mar 18 10:16:43.136918 master-0 kubenswrapper[30420]: I0318 10:16:43.136786 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7ff9bc57fc-q5plp" podStartSLOduration=2.136760376 podStartE2EDuration="2.136760376s" podCreationTimestamp="2026-03-18 10:16:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:16:43.136065258 +0000 UTC m=+367.188811187" watchObservedRunningTime="2026-03-18 10:16:43.136760376 +0000 UTC m=+367.189506305" Mar 18 10:16:46.117579 master-0 kubenswrapper[30420]: I0318 10:16:46.117491 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-console/console-74bbfbc495-9qrz2" podStartSLOduration=5.618415347 podStartE2EDuration="11.11746896s" podCreationTimestamp="2026-03-18 10:16:35 +0000 UTC" firstStartedPulling="2026-03-18 10:16:36.698756534 +0000 UTC m=+360.751502463" lastFinishedPulling="2026-03-18 10:16:42.197810147 +0000 UTC m=+366.250556076" observedRunningTime="2026-03-18 10:16:43.169179097 +0000 UTC m=+367.221925026" watchObservedRunningTime="2026-03-18 10:16:46.11746896 +0000 UTC m=+370.170214889" Mar 18 10:16:46.121564 master-0 kubenswrapper[30420]: I0318 10:16:46.121507 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-5-master-0"] Mar 18 10:16:46.122904 master-0 kubenswrapper[30420]: I0318 10:16:46.122867 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 10:16:46.131564 master-0 kubenswrapper[30420]: I0318 10:16:46.127059 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-76rsr" Mar 18 10:16:46.131564 master-0 kubenswrapper[30420]: I0318 10:16:46.127804 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 18 10:16:46.136580 master-0 kubenswrapper[30420]: I0318 10:16:46.136539 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-5-master-0"] Mar 18 10:16:46.267628 master-0 kubenswrapper[30420]: I0318 10:16:46.267560 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-74bbfbc495-9qrz2" Mar 18 10:16:46.268067 master-0 kubenswrapper[30420]: I0318 10:16:46.268006 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-74bbfbc495-9qrz2" Mar 18 10:16:46.268893 master-0 kubenswrapper[30420]: I0318 10:16:46.268840 30420 patch_prober.go:28] 
interesting pod/console-74bbfbc495-9qrz2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.106:8443/health\": dial tcp 10.128.0.106:8443: connect: connection refused" start-of-body= Mar 18 10:16:46.268983 master-0 kubenswrapper[30420]: I0318 10:16:46.268906 30420 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-74bbfbc495-9qrz2" podUID="05094271-f491-4119-a9db-88b7fe4f7f3c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.106:8443/health\": dial tcp 10.128.0.106:8443: connect: connection refused" Mar 18 10:16:46.271667 master-0 kubenswrapper[30420]: I0318 10:16:46.271508 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2be13c7e-ab8c-43a4-ad8e-4ef8fd233348-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"2be13c7e-ab8c-43a4-ad8e-4ef8fd233348\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 10:16:46.271667 master-0 kubenswrapper[30420]: I0318 10:16:46.271640 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2be13c7e-ab8c-43a4-ad8e-4ef8fd233348-var-lock\") pod \"installer-5-master-0\" (UID: \"2be13c7e-ab8c-43a4-ad8e-4ef8fd233348\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 10:16:46.272227 master-0 kubenswrapper[30420]: I0318 10:16:46.271695 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2be13c7e-ab8c-43a4-ad8e-4ef8fd233348-kube-api-access\") pod \"installer-5-master-0\" (UID: \"2be13c7e-ab8c-43a4-ad8e-4ef8fd233348\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 10:16:46.373809 master-0 kubenswrapper[30420]: I0318 10:16:46.373667 30420 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2be13c7e-ab8c-43a4-ad8e-4ef8fd233348-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"2be13c7e-ab8c-43a4-ad8e-4ef8fd233348\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 10:16:46.374512 master-0 kubenswrapper[30420]: I0318 10:16:46.374464 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2be13c7e-ab8c-43a4-ad8e-4ef8fd233348-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"2be13c7e-ab8c-43a4-ad8e-4ef8fd233348\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 10:16:46.374570 master-0 kubenswrapper[30420]: I0318 10:16:46.374503 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2be13c7e-ab8c-43a4-ad8e-4ef8fd233348-var-lock\") pod \"installer-5-master-0\" (UID: \"2be13c7e-ab8c-43a4-ad8e-4ef8fd233348\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 10:16:46.374570 master-0 kubenswrapper[30420]: I0318 10:16:46.374468 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2be13c7e-ab8c-43a4-ad8e-4ef8fd233348-var-lock\") pod \"installer-5-master-0\" (UID: \"2be13c7e-ab8c-43a4-ad8e-4ef8fd233348\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 10:16:46.374634 master-0 kubenswrapper[30420]: I0318 10:16:46.374599 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2be13c7e-ab8c-43a4-ad8e-4ef8fd233348-kube-api-access\") pod \"installer-5-master-0\" (UID: \"2be13c7e-ab8c-43a4-ad8e-4ef8fd233348\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 10:16:46.392305 master-0 kubenswrapper[30420]: I0318 10:16:46.392215 
30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2be13c7e-ab8c-43a4-ad8e-4ef8fd233348-kube-api-access\") pod \"installer-5-master-0\" (UID: \"2be13c7e-ab8c-43a4-ad8e-4ef8fd233348\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 10:16:46.458398 master-0 kubenswrapper[30420]: I0318 10:16:46.458165 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 10:16:46.870570 master-0 kubenswrapper[30420]: I0318 10:16:46.870509 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-5-master-0"] Mar 18 10:16:46.881163 master-0 kubenswrapper[30420]: W0318 10:16:46.881115 30420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod2be13c7e_ab8c_43a4_ad8e_4ef8fd233348.slice/crio-70934370d695eca554e46b1bb0b2b8cc28acb5193a15eb7f9ae3352d31d135b9 WatchSource:0}: Error finding container 70934370d695eca554e46b1bb0b2b8cc28acb5193a15eb7f9ae3352d31d135b9: Status 404 returned error can't find the container with id 70934370d695eca554e46b1bb0b2b8cc28acb5193a15eb7f9ae3352d31d135b9 Mar 18 10:16:47.169090 master-0 kubenswrapper[30420]: I0318 10:16:47.168926 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"2be13c7e-ab8c-43a4-ad8e-4ef8fd233348","Type":"ContainerStarted","Data":"70934370d695eca554e46b1bb0b2b8cc28acb5193a15eb7f9ae3352d31d135b9"} Mar 18 10:16:48.196921 master-0 kubenswrapper[30420]: I0318 10:16:48.196855 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"2be13c7e-ab8c-43a4-ad8e-4ef8fd233348","Type":"ContainerStarted","Data":"5ec20a8c23e21367e9d103100e2a4bdf8b14e279057d3c18d3ce728c07d6f81f"} Mar 18 10:16:48.222413 master-0 kubenswrapper[30420]: 
I0318 10:16:48.222266 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-5-master-0" podStartSLOduration=2.222229551 podStartE2EDuration="2.222229551s" podCreationTimestamp="2026-03-18 10:16:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:16:48.212709103 +0000 UTC m=+372.265455042" watchObservedRunningTime="2026-03-18 10:16:48.222229551 +0000 UTC m=+372.274975520" Mar 18 10:16:50.740908 master-0 kubenswrapper[30420]: I0318 10:16:50.740851 30420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-74bbfbc495-9qrz2"] Mar 18 10:16:50.777614 master-0 kubenswrapper[30420]: I0318 10:16:50.777552 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-57d6b5b44-hc2hr"] Mar 18 10:16:50.779169 master-0 kubenswrapper[30420]: I0318 10:16:50.779136 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-57d6b5b44-hc2hr" Mar 18 10:16:50.808721 master-0 kubenswrapper[30420]: I0318 10:16:50.808672 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-57d6b5b44-hc2hr"] Mar 18 10:16:50.853095 master-0 kubenswrapper[30420]: I0318 10:16:50.853057 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/999213fe-0b3a-4231-80be-6cffc474d94d-console-config\") pod \"console-57d6b5b44-hc2hr\" (UID: \"999213fe-0b3a-4231-80be-6cffc474d94d\") " pod="openshift-console/console-57d6b5b44-hc2hr" Mar 18 10:16:50.853348 master-0 kubenswrapper[30420]: I0318 10:16:50.853329 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/999213fe-0b3a-4231-80be-6cffc474d94d-service-ca\") pod \"console-57d6b5b44-hc2hr\" (UID: \"999213fe-0b3a-4231-80be-6cffc474d94d\") " pod="openshift-console/console-57d6b5b44-hc2hr" Mar 18 10:16:50.853461 master-0 kubenswrapper[30420]: I0318 10:16:50.853445 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/999213fe-0b3a-4231-80be-6cffc474d94d-console-oauth-config\") pod \"console-57d6b5b44-hc2hr\" (UID: \"999213fe-0b3a-4231-80be-6cffc474d94d\") " pod="openshift-console/console-57d6b5b44-hc2hr" Mar 18 10:16:50.853556 master-0 kubenswrapper[30420]: I0318 10:16:50.853544 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxcfc\" (UniqueName: \"kubernetes.io/projected/999213fe-0b3a-4231-80be-6cffc474d94d-kube-api-access-pxcfc\") pod \"console-57d6b5b44-hc2hr\" (UID: \"999213fe-0b3a-4231-80be-6cffc474d94d\") " pod="openshift-console/console-57d6b5b44-hc2hr" Mar 18 10:16:50.853656 master-0 
kubenswrapper[30420]: I0318 10:16:50.853644 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/999213fe-0b3a-4231-80be-6cffc474d94d-console-serving-cert\") pod \"console-57d6b5b44-hc2hr\" (UID: \"999213fe-0b3a-4231-80be-6cffc474d94d\") " pod="openshift-console/console-57d6b5b44-hc2hr" Mar 18 10:16:50.853756 master-0 kubenswrapper[30420]: I0318 10:16:50.853743 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/999213fe-0b3a-4231-80be-6cffc474d94d-oauth-serving-cert\") pod \"console-57d6b5b44-hc2hr\" (UID: \"999213fe-0b3a-4231-80be-6cffc474d94d\") " pod="openshift-console/console-57d6b5b44-hc2hr" Mar 18 10:16:50.853850 master-0 kubenswrapper[30420]: I0318 10:16:50.853838 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/999213fe-0b3a-4231-80be-6cffc474d94d-trusted-ca-bundle\") pod \"console-57d6b5b44-hc2hr\" (UID: \"999213fe-0b3a-4231-80be-6cffc474d94d\") " pod="openshift-console/console-57d6b5b44-hc2hr" Mar 18 10:16:50.954925 master-0 kubenswrapper[30420]: I0318 10:16:50.954844 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/999213fe-0b3a-4231-80be-6cffc474d94d-console-config\") pod \"console-57d6b5b44-hc2hr\" (UID: \"999213fe-0b3a-4231-80be-6cffc474d94d\") " pod="openshift-console/console-57d6b5b44-hc2hr" Mar 18 10:16:50.954925 master-0 kubenswrapper[30420]: I0318 10:16:50.954917 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/999213fe-0b3a-4231-80be-6cffc474d94d-service-ca\") pod \"console-57d6b5b44-hc2hr\" (UID: \"999213fe-0b3a-4231-80be-6cffc474d94d\") " 
pod="openshift-console/console-57d6b5b44-hc2hr" Mar 18 10:16:50.955219 master-0 kubenswrapper[30420]: I0318 10:16:50.954972 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/999213fe-0b3a-4231-80be-6cffc474d94d-console-oauth-config\") pod \"console-57d6b5b44-hc2hr\" (UID: \"999213fe-0b3a-4231-80be-6cffc474d94d\") " pod="openshift-console/console-57d6b5b44-hc2hr" Mar 18 10:16:50.955219 master-0 kubenswrapper[30420]: I0318 10:16:50.955016 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxcfc\" (UniqueName: \"kubernetes.io/projected/999213fe-0b3a-4231-80be-6cffc474d94d-kube-api-access-pxcfc\") pod \"console-57d6b5b44-hc2hr\" (UID: \"999213fe-0b3a-4231-80be-6cffc474d94d\") " pod="openshift-console/console-57d6b5b44-hc2hr" Mar 18 10:16:50.955219 master-0 kubenswrapper[30420]: I0318 10:16:50.955064 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/999213fe-0b3a-4231-80be-6cffc474d94d-console-serving-cert\") pod \"console-57d6b5b44-hc2hr\" (UID: \"999213fe-0b3a-4231-80be-6cffc474d94d\") " pod="openshift-console/console-57d6b5b44-hc2hr" Mar 18 10:16:50.955219 master-0 kubenswrapper[30420]: I0318 10:16:50.955113 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/999213fe-0b3a-4231-80be-6cffc474d94d-oauth-serving-cert\") pod \"console-57d6b5b44-hc2hr\" (UID: \"999213fe-0b3a-4231-80be-6cffc474d94d\") " pod="openshift-console/console-57d6b5b44-hc2hr" Mar 18 10:16:50.955219 master-0 kubenswrapper[30420]: I0318 10:16:50.955132 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/999213fe-0b3a-4231-80be-6cffc474d94d-trusted-ca-bundle\") pod 
\"console-57d6b5b44-hc2hr\" (UID: \"999213fe-0b3a-4231-80be-6cffc474d94d\") " pod="openshift-console/console-57d6b5b44-hc2hr" Mar 18 10:16:50.956416 master-0 kubenswrapper[30420]: I0318 10:16:50.956391 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/999213fe-0b3a-4231-80be-6cffc474d94d-service-ca\") pod \"console-57d6b5b44-hc2hr\" (UID: \"999213fe-0b3a-4231-80be-6cffc474d94d\") " pod="openshift-console/console-57d6b5b44-hc2hr" Mar 18 10:16:50.956541 master-0 kubenswrapper[30420]: I0318 10:16:50.956433 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/999213fe-0b3a-4231-80be-6cffc474d94d-console-config\") pod \"console-57d6b5b44-hc2hr\" (UID: \"999213fe-0b3a-4231-80be-6cffc474d94d\") " pod="openshift-console/console-57d6b5b44-hc2hr" Mar 18 10:16:50.956640 master-0 kubenswrapper[30420]: I0318 10:16:50.956532 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/999213fe-0b3a-4231-80be-6cffc474d94d-trusted-ca-bundle\") pod \"console-57d6b5b44-hc2hr\" (UID: \"999213fe-0b3a-4231-80be-6cffc474d94d\") " pod="openshift-console/console-57d6b5b44-hc2hr" Mar 18 10:16:50.956718 master-0 kubenswrapper[30420]: I0318 10:16:50.956669 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/999213fe-0b3a-4231-80be-6cffc474d94d-oauth-serving-cert\") pod \"console-57d6b5b44-hc2hr\" (UID: \"999213fe-0b3a-4231-80be-6cffc474d94d\") " pod="openshift-console/console-57d6b5b44-hc2hr" Mar 18 10:16:50.958666 master-0 kubenswrapper[30420]: I0318 10:16:50.958628 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/999213fe-0b3a-4231-80be-6cffc474d94d-console-serving-cert\") pod 
\"console-57d6b5b44-hc2hr\" (UID: \"999213fe-0b3a-4231-80be-6cffc474d94d\") " pod="openshift-console/console-57d6b5b44-hc2hr" Mar 18 10:16:50.959408 master-0 kubenswrapper[30420]: I0318 10:16:50.959307 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/999213fe-0b3a-4231-80be-6cffc474d94d-console-oauth-config\") pod \"console-57d6b5b44-hc2hr\" (UID: \"999213fe-0b3a-4231-80be-6cffc474d94d\") " pod="openshift-console/console-57d6b5b44-hc2hr" Mar 18 10:16:50.973969 master-0 kubenswrapper[30420]: I0318 10:16:50.973873 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxcfc\" (UniqueName: \"kubernetes.io/projected/999213fe-0b3a-4231-80be-6cffc474d94d-kube-api-access-pxcfc\") pod \"console-57d6b5b44-hc2hr\" (UID: \"999213fe-0b3a-4231-80be-6cffc474d94d\") " pod="openshift-console/console-57d6b5b44-hc2hr" Mar 18 10:16:51.116294 master-0 kubenswrapper[30420]: I0318 10:16:51.116171 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-57d6b5b44-hc2hr" Mar 18 10:16:51.527866 master-0 kubenswrapper[30420]: W0318 10:16:51.527796 30420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod999213fe_0b3a_4231_80be_6cffc474d94d.slice/crio-3748813bf850f4eeca8690362bb861aedda70485194a95ebcb39394a19df1091 WatchSource:0}: Error finding container 3748813bf850f4eeca8690362bb861aedda70485194a95ebcb39394a19df1091: Status 404 returned error can't find the container with id 3748813bf850f4eeca8690362bb861aedda70485194a95ebcb39394a19df1091 Mar 18 10:16:51.532517 master-0 kubenswrapper[30420]: I0318 10:16:51.532369 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-57d6b5b44-hc2hr"] Mar 18 10:16:52.153006 master-0 kubenswrapper[30420]: I0318 10:16:52.152961 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7ff9bc57fc-q5plp" Mar 18 10:16:52.153692 master-0 kubenswrapper[30420]: I0318 10:16:52.153677 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7ff9bc57fc-q5plp" Mar 18 10:16:52.154625 master-0 kubenswrapper[30420]: I0318 10:16:52.154547 30420 patch_prober.go:28] interesting pod/console-7ff9bc57fc-q5plp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection refused" start-of-body= Mar 18 10:16:52.154726 master-0 kubenswrapper[30420]: I0318 10:16:52.154611 30420 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7ff9bc57fc-q5plp" podUID="6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985" containerName="console" probeResult="failure" output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection refused" Mar 18 10:16:52.223686 master-0 kubenswrapper[30420]: I0318 
10:16:52.223442 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57d6b5b44-hc2hr" event={"ID":"999213fe-0b3a-4231-80be-6cffc474d94d","Type":"ContainerStarted","Data":"e6e9aa9d5f7efe6d00474f60585df43e85ee0389c5677d30c1078b03b74a708a"} Mar 18 10:16:52.223686 master-0 kubenswrapper[30420]: I0318 10:16:52.223478 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57d6b5b44-hc2hr" event={"ID":"999213fe-0b3a-4231-80be-6cffc474d94d","Type":"ContainerStarted","Data":"3748813bf850f4eeca8690362bb861aedda70485194a95ebcb39394a19df1091"} Mar 18 10:16:52.254106 master-0 kubenswrapper[30420]: I0318 10:16:52.254019 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-57d6b5b44-hc2hr" podStartSLOduration=2.253988428 podStartE2EDuration="2.253988428s" podCreationTimestamp="2026-03-18 10:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:16:52.252330717 +0000 UTC m=+376.305076646" watchObservedRunningTime="2026-03-18 10:16:52.253988428 +0000 UTC m=+376.306734357" Mar 18 10:17:01.116580 master-0 kubenswrapper[30420]: I0318 10:17:01.116523 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-57d6b5b44-hc2hr" Mar 18 10:17:01.116580 master-0 kubenswrapper[30420]: I0318 10:17:01.116583 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-57d6b5b44-hc2hr" Mar 18 10:17:01.118361 master-0 kubenswrapper[30420]: I0318 10:17:01.118321 30420 patch_prober.go:28] interesting pod/console-57d6b5b44-hc2hr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.109:8443/health\": dial tcp 10.128.0.109:8443: connect: connection refused" start-of-body= Mar 18 10:17:01.118429 master-0 kubenswrapper[30420]: I0318 10:17:01.118368 
30420 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-57d6b5b44-hc2hr" podUID="999213fe-0b3a-4231-80be-6cffc474d94d" containerName="console" probeResult="failure" output="Get \"https://10.128.0.109:8443/health\": dial tcp 10.128.0.109:8443: connect: connection refused" Mar 18 10:17:02.154881 master-0 kubenswrapper[30420]: I0318 10:17:02.152733 30420 patch_prober.go:28] interesting pod/console-7ff9bc57fc-q5plp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection refused" start-of-body= Mar 18 10:17:02.154881 master-0 kubenswrapper[30420]: I0318 10:17:02.152817 30420 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7ff9bc57fc-q5plp" podUID="6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985" containerName="console" probeResult="failure" output="Get \"https://10.128.0.107:8443/health\": dial tcp 10.128.0.107:8443: connect: connection refused" Mar 18 10:17:06.840124 master-0 kubenswrapper[30420]: I0318 10:17:06.839893 30420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" podUID="edc60dd5-333f-44bc-bb10-f10673c59074" containerName="oauth-openshift" containerID="cri-o://3c865a915fa70c9713900bc74ae8ce02817ffa929bdcfc9a047b9dd914cf416e" gracePeriod=15 Mar 18 10:17:07.340996 master-0 kubenswrapper[30420]: I0318 10:17:07.340511 30420 generic.go:334] "Generic (PLEG): container finished" podID="edc60dd5-333f-44bc-bb10-f10673c59074" containerID="3c865a915fa70c9713900bc74ae8ce02817ffa929bdcfc9a047b9dd914cf416e" exitCode=0 Mar 18 10:17:07.340996 master-0 kubenswrapper[30420]: I0318 10:17:07.340595 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" 
event={"ID":"edc60dd5-333f-44bc-bb10-f10673c59074","Type":"ContainerDied","Data":"3c865a915fa70c9713900bc74ae8ce02817ffa929bdcfc9a047b9dd914cf416e"} Mar 18 10:17:09.399900 master-0 kubenswrapper[30420]: I0318 10:17:09.399769 30420 patch_prober.go:28] interesting pod/oauth-openshift-7c7b74cb9b-hkblm container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.128.0.96:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 10:17:09.400565 master-0 kubenswrapper[30420]: I0318 10:17:09.399899 30420 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" podUID="edc60dd5-333f-44bc-bb10-f10673c59074" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.128.0.96:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 10:17:09.696154 master-0 kubenswrapper[30420]: I0318 10:17:09.691898 30420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:17:09.795709 master-0 kubenswrapper[30420]: I0318 10:17:09.795618 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-user-template-provider-selection\") pod \"edc60dd5-333f-44bc-bb10-f10673c59074\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " Mar 18 10:17:09.795994 master-0 kubenswrapper[30420]: I0318 10:17:09.795750 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-user-template-login\") pod \"edc60dd5-333f-44bc-bb10-f10673c59074\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " Mar 18 10:17:09.795994 master-0 kubenswrapper[30420]: I0318 10:17:09.795891 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-cliconfig\") pod \"edc60dd5-333f-44bc-bb10-f10673c59074\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " Mar 18 10:17:09.795994 master-0 kubenswrapper[30420]: I0318 10:17:09.795949 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-service-ca\") pod \"edc60dd5-333f-44bc-bb10-f10673c59074\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " Mar 18 10:17:09.796166 master-0 kubenswrapper[30420]: I0318 10:17:09.796010 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-ocp-branding-template\") pod \"edc60dd5-333f-44bc-bb10-f10673c59074\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " Mar 18 10:17:09.796166 master-0 kubenswrapper[30420]: I0318 10:17:09.796063 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-trusted-ca-bundle\") pod \"edc60dd5-333f-44bc-bb10-f10673c59074\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " Mar 18 10:17:09.796166 master-0 kubenswrapper[30420]: I0318 10:17:09.796111 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/edc60dd5-333f-44bc-bb10-f10673c59074-audit-dir\") pod \"edc60dd5-333f-44bc-bb10-f10673c59074\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " Mar 18 10:17:09.796298 master-0 kubenswrapper[30420]: I0318 10:17:09.796203 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-session\") pod \"edc60dd5-333f-44bc-bb10-f10673c59074\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " Mar 18 10:17:09.796298 master-0 kubenswrapper[30420]: I0318 10:17:09.796264 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms2hq\" (UniqueName: \"kubernetes.io/projected/edc60dd5-333f-44bc-bb10-f10673c59074-kube-api-access-ms2hq\") pod \"edc60dd5-333f-44bc-bb10-f10673c59074\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " Mar 18 10:17:09.796298 master-0 kubenswrapper[30420]: I0318 10:17:09.796163 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edc60dd5-333f-44bc-bb10-f10673c59074-audit-dir" (OuterVolumeSpecName: 
"audit-dir") pod "edc60dd5-333f-44bc-bb10-f10673c59074" (UID: "edc60dd5-333f-44bc-bb10-f10673c59074"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:17:09.796430 master-0 kubenswrapper[30420]: I0318 10:17:09.796318 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-user-template-error\") pod \"edc60dd5-333f-44bc-bb10-f10673c59074\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " Mar 18 10:17:09.796430 master-0 kubenswrapper[30420]: I0318 10:17:09.796398 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/edc60dd5-333f-44bc-bb10-f10673c59074-audit-policies\") pod \"edc60dd5-333f-44bc-bb10-f10673c59074\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " Mar 18 10:17:09.796535 master-0 kubenswrapper[30420]: I0318 10:17:09.796462 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-router-certs\") pod \"edc60dd5-333f-44bc-bb10-f10673c59074\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " Mar 18 10:17:09.796535 master-0 kubenswrapper[30420]: I0318 10:17:09.796513 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-serving-cert\") pod \"edc60dd5-333f-44bc-bb10-f10673c59074\" (UID: \"edc60dd5-333f-44bc-bb10-f10673c59074\") " Mar 18 10:17:09.796713 master-0 kubenswrapper[30420]: I0318 10:17:09.796665 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "edc60dd5-333f-44bc-bb10-f10673c59074" (UID: "edc60dd5-333f-44bc-bb10-f10673c59074"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:17:09.796940 master-0 kubenswrapper[30420]: I0318 10:17:09.796816 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "edc60dd5-333f-44bc-bb10-f10673c59074" (UID: "edc60dd5-333f-44bc-bb10-f10673c59074"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:17:09.797122 master-0 kubenswrapper[30420]: I0318 10:17:09.797071 30420 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 10:17:09.797231 master-0 kubenswrapper[30420]: I0318 10:17:09.797131 30420 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 10:17:09.797231 master-0 kubenswrapper[30420]: I0318 10:17:09.797134 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edc60dd5-333f-44bc-bb10-f10673c59074-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "edc60dd5-333f-44bc-bb10-f10673c59074" (UID: "edc60dd5-333f-44bc-bb10-f10673c59074"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:17:09.797231 master-0 kubenswrapper[30420]: I0318 10:17:09.797167 30420 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/edc60dd5-333f-44bc-bb10-f10673c59074-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 10:17:09.797944 master-0 kubenswrapper[30420]: I0318 10:17:09.797905 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "edc60dd5-333f-44bc-bb10-f10673c59074" (UID: "edc60dd5-333f-44bc-bb10-f10673c59074"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:17:09.799654 master-0 kubenswrapper[30420]: I0318 10:17:09.799601 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "edc60dd5-333f-44bc-bb10-f10673c59074" (UID: "edc60dd5-333f-44bc-bb10-f10673c59074"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 10:17:09.799780 master-0 kubenswrapper[30420]: I0318 10:17:09.799742 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "edc60dd5-333f-44bc-bb10-f10673c59074" (UID: "edc60dd5-333f-44bc-bb10-f10673c59074"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 10:17:09.800623 master-0 kubenswrapper[30420]: I0318 10:17:09.800582 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "edc60dd5-333f-44bc-bb10-f10673c59074" (UID: "edc60dd5-333f-44bc-bb10-f10673c59074"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 10:17:09.800742 master-0 kubenswrapper[30420]: I0318 10:17:09.800664 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "edc60dd5-333f-44bc-bb10-f10673c59074" (UID: "edc60dd5-333f-44bc-bb10-f10673c59074"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 10:17:09.801329 master-0 kubenswrapper[30420]: I0318 10:17:09.801270 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "edc60dd5-333f-44bc-bb10-f10673c59074" (UID: "edc60dd5-333f-44bc-bb10-f10673c59074"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 10:17:09.801768 master-0 kubenswrapper[30420]: I0318 10:17:09.801739 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edc60dd5-333f-44bc-bb10-f10673c59074-kube-api-access-ms2hq" (OuterVolumeSpecName: "kube-api-access-ms2hq") pod "edc60dd5-333f-44bc-bb10-f10673c59074" (UID: "edc60dd5-333f-44bc-bb10-f10673c59074"). InnerVolumeSpecName "kube-api-access-ms2hq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 10:17:09.801905 master-0 kubenswrapper[30420]: I0318 10:17:09.801889 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "edc60dd5-333f-44bc-bb10-f10673c59074" (UID: "edc60dd5-333f-44bc-bb10-f10673c59074"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 10:17:09.802087 master-0 kubenswrapper[30420]: I0318 10:17:09.802029 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "edc60dd5-333f-44bc-bb10-f10673c59074" (UID: "edc60dd5-333f-44bc-bb10-f10673c59074"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 10:17:09.899034 master-0 kubenswrapper[30420]: I0318 10:17:09.898962 30420 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Mar 18 10:17:09.899034 master-0 kubenswrapper[30420]: I0318 10:17:09.899033 30420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms2hq\" (UniqueName: \"kubernetes.io/projected/edc60dd5-333f-44bc-bb10-f10673c59074-kube-api-access-ms2hq\") on node \"master-0\" DevicePath \"\"" Mar 18 10:17:09.899307 master-0 kubenswrapper[30420]: I0318 10:17:09.899055 30420 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Mar 18 10:17:09.899307 
master-0 kubenswrapper[30420]: I0318 10:17:09.899078 30420 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/edc60dd5-333f-44bc-bb10-f10673c59074-audit-policies\") on node \"master-0\" DevicePath \"\"" Mar 18 10:17:09.899307 master-0 kubenswrapper[30420]: I0318 10:17:09.899096 30420 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 10:17:09.899307 master-0 kubenswrapper[30420]: I0318 10:17:09.899114 30420 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 10:17:09.899307 master-0 kubenswrapper[30420]: I0318 10:17:09.899134 30420 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\"" Mar 18 10:17:09.899307 master-0 kubenswrapper[30420]: I0318 10:17:09.899152 30420 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\"" Mar 18 10:17:09.899307 master-0 kubenswrapper[30420]: I0318 10:17:09.899167 30420 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\"" Mar 18 10:17:09.899307 master-0 kubenswrapper[30420]: I0318 10:17:09.899186 30420 reconciler_common.go:293] "Volume 
detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/edc60dd5-333f-44bc-bb10-f10673c59074-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Mar 18 10:17:10.364271 master-0 kubenswrapper[30420]: I0318 10:17:10.364201 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" event={"ID":"edc60dd5-333f-44bc-bb10-f10673c59074","Type":"ContainerDied","Data":"7104e089085ccb21356703553eb0de595cd957d72f937c0fc9cc0a6e933d1d6c"} Mar 18 10:17:10.364271 master-0 kubenswrapper[30420]: I0318 10:17:10.364274 30420 scope.go:117] "RemoveContainer" containerID="3c865a915fa70c9713900bc74ae8ce02817ffa929bdcfc9a047b9dd914cf416e" Mar 18 10:17:10.364648 master-0 kubenswrapper[30420]: I0318 10:17:10.364393 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm" Mar 18 10:17:10.420618 master-0 kubenswrapper[30420]: I0318 10:17:10.420345 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-7c6b76c555-hpcv7"] Mar 18 10:17:10.421162 master-0 kubenswrapper[30420]: E0318 10:17:10.420984 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edc60dd5-333f-44bc-bb10-f10673c59074" containerName="oauth-openshift" Mar 18 10:17:10.421162 master-0 kubenswrapper[30420]: I0318 10:17:10.421000 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="edc60dd5-333f-44bc-bb10-f10673c59074" containerName="oauth-openshift" Mar 18 10:17:10.421162 master-0 kubenswrapper[30420]: I0318 10:17:10.421146 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="edc60dd5-333f-44bc-bb10-f10673c59074" containerName="oauth-openshift" Mar 18 10:17:10.422766 master-0 kubenswrapper[30420]: I0318 10:17:10.421921 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c6b76c555-hpcv7" Mar 18 10:17:10.434921 master-0 kubenswrapper[30420]: I0318 10:17:10.433901 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 18 10:17:10.434921 master-0 kubenswrapper[30420]: I0318 10:17:10.434416 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 18 10:17:10.484913 master-0 kubenswrapper[30420]: I0318 10:17:10.484805 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-7c6b76c555-hpcv7"] Mar 18 10:17:10.547854 master-0 kubenswrapper[30420]: I0318 10:17:10.527027 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/2c86cb82-ca1c-4237-b040-c8a7da74b73c-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-hpcv7\" (UID: \"2c86cb82-ca1c-4237-b040-c8a7da74b73c\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-hpcv7" Mar 18 10:17:10.547854 master-0 kubenswrapper[30420]: I0318 10:17:10.527100 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2c86cb82-ca1c-4237-b040-c8a7da74b73c-nginx-conf\") pod \"networking-console-plugin-7c6b76c555-hpcv7\" (UID: \"2c86cb82-ca1c-4237-b040-c8a7da74b73c\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-hpcv7" Mar 18 10:17:10.556038 master-0 kubenswrapper[30420]: I0318 10:17:10.550815 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-754d5d5989-q9cdp"] Mar 18 10:17:10.559104 master-0 kubenswrapper[30420]: I0318 10:17:10.559052 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.563947 master-0 kubenswrapper[30420]: I0318 10:17:10.563056 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 18 10:17:10.563947 master-0 kubenswrapper[30420]: I0318 10:17:10.563279 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 18 10:17:10.563947 master-0 kubenswrapper[30420]: I0318 10:17:10.563528 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 18 10:17:10.563947 master-0 kubenswrapper[30420]: I0318 10:17:10.563794 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 18 10:17:10.564187 master-0 kubenswrapper[30420]: I0318 10:17:10.563991 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 18 10:17:10.570117 master-0 kubenswrapper[30420]: I0318 10:17:10.565244 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 18 10:17:10.577053 master-0 kubenswrapper[30420]: I0318 10:17:10.572417 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-dpzpw" Mar 18 10:17:10.577053 master-0 kubenswrapper[30420]: I0318 10:17:10.572794 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 18 10:17:10.577053 master-0 kubenswrapper[30420]: I0318 10:17:10.572996 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 18 10:17:10.577053 master-0 kubenswrapper[30420]: I0318 
10:17:10.576210 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 18 10:17:10.577053 master-0 kubenswrapper[30420]: I0318 10:17:10.576870 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 18 10:17:10.577053 master-0 kubenswrapper[30420]: I0318 10:17:10.576887 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 18 10:17:10.585074 master-0 kubenswrapper[30420]: I0318 10:17:10.584788 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 18 10:17:10.585305 master-0 kubenswrapper[30420]: I0318 10:17:10.584644 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 18 10:17:10.589960 master-0 kubenswrapper[30420]: I0318 10:17:10.589754 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-754d5d5989-q9cdp"] Mar 18 10:17:10.617275 master-0 kubenswrapper[30420]: I0318 10:17:10.617073 30420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm"] Mar 18 10:17:10.625954 master-0 kubenswrapper[30420]: I0318 10:17:10.625902 30420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-7c7b74cb9b-hkblm"] Mar 18 10:17:10.629233 master-0 kubenswrapper[30420]: I0318 10:17:10.629196 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dba62b67-572b-4250-a7de-1a092edd4c68-v4-0-config-system-session\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " 
pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.629364 master-0 kubenswrapper[30420]: I0318 10:17:10.629247 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dba62b67-572b-4250-a7de-1a092edd4c68-v4-0-config-system-serving-cert\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.629364 master-0 kubenswrapper[30420]: I0318 10:17:10.629333 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/2c86cb82-ca1c-4237-b040-c8a7da74b73c-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-hpcv7\" (UID: \"2c86cb82-ca1c-4237-b040-c8a7da74b73c\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-hpcv7" Mar 18 10:17:10.629451 master-0 kubenswrapper[30420]: I0318 10:17:10.629373 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dba62b67-572b-4250-a7de-1a092edd4c68-v4-0-config-user-template-login\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.629451 master-0 kubenswrapper[30420]: I0318 10:17:10.629401 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dba62b67-572b-4250-a7de-1a092edd4c68-audit-dir\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.629451 master-0 kubenswrapper[30420]: I0318 
10:17:10.629426 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2c86cb82-ca1c-4237-b040-c8a7da74b73c-nginx-conf\") pod \"networking-console-plugin-7c6b76c555-hpcv7\" (UID: \"2c86cb82-ca1c-4237-b040-c8a7da74b73c\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-hpcv7" Mar 18 10:17:10.629590 master-0 kubenswrapper[30420]: I0318 10:17:10.629488 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dba62b67-572b-4250-a7de-1a092edd4c68-audit-policies\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.629590 master-0 kubenswrapper[30420]: I0318 10:17:10.629529 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dba62b67-572b-4250-a7de-1a092edd4c68-v4-0-config-system-service-ca\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.629590 master-0 kubenswrapper[30420]: I0318 10:17:10.629565 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dba62b67-572b-4250-a7de-1a092edd4c68-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.629753 master-0 kubenswrapper[30420]: I0318 10:17:10.629598 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dba62b67-572b-4250-a7de-1a092edd4c68-v4-0-config-system-cliconfig\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.629753 master-0 kubenswrapper[30420]: I0318 10:17:10.629656 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dba62b67-572b-4250-a7de-1a092edd4c68-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.629753 master-0 kubenswrapper[30420]: I0318 10:17:10.629686 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dba62b67-572b-4250-a7de-1a092edd4c68-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.629753 master-0 kubenswrapper[30420]: I0318 10:17:10.629711 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dba62b67-572b-4250-a7de-1a092edd4c68-v4-0-config-system-router-certs\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.630006 master-0 kubenswrapper[30420]: I0318 10:17:10.629766 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhmqw\" (UniqueName: 
\"kubernetes.io/projected/dba62b67-572b-4250-a7de-1a092edd4c68-kube-api-access-jhmqw\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.630006 master-0 kubenswrapper[30420]: I0318 10:17:10.629818 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dba62b67-572b-4250-a7de-1a092edd4c68-v4-0-config-user-template-error\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.630228 master-0 kubenswrapper[30420]: I0318 10:17:10.630200 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2c86cb82-ca1c-4237-b040-c8a7da74b73c-nginx-conf\") pod \"networking-console-plugin-7c6b76c555-hpcv7\" (UID: \"2c86cb82-ca1c-4237-b040-c8a7da74b73c\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-hpcv7" Mar 18 10:17:10.632461 master-0 kubenswrapper[30420]: I0318 10:17:10.632401 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/2c86cb82-ca1c-4237-b040-c8a7da74b73c-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-hpcv7\" (UID: \"2c86cb82-ca1c-4237-b040-c8a7da74b73c\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-hpcv7" Mar 18 10:17:10.731298 master-0 kubenswrapper[30420]: I0318 10:17:10.731146 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhmqw\" (UniqueName: \"kubernetes.io/projected/dba62b67-572b-4250-a7de-1a092edd4c68-kube-api-access-jhmqw\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " 
pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.731298 master-0 kubenswrapper[30420]: I0318 10:17:10.731234 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dba62b67-572b-4250-a7de-1a092edd4c68-v4-0-config-user-template-error\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.731298 master-0 kubenswrapper[30420]: I0318 10:17:10.731269 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dba62b67-572b-4250-a7de-1a092edd4c68-v4-0-config-system-session\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.731298 master-0 kubenswrapper[30420]: I0318 10:17:10.731303 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dba62b67-572b-4250-a7de-1a092edd4c68-v4-0-config-system-serving-cert\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.731686 master-0 kubenswrapper[30420]: I0318 10:17:10.731340 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dba62b67-572b-4250-a7de-1a092edd4c68-v4-0-config-user-template-login\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.731686 master-0 kubenswrapper[30420]: I0318 10:17:10.731370 30420 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dba62b67-572b-4250-a7de-1a092edd4c68-audit-dir\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.731686 master-0 kubenswrapper[30420]: I0318 10:17:10.731411 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dba62b67-572b-4250-a7de-1a092edd4c68-audit-policies\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.731686 master-0 kubenswrapper[30420]: I0318 10:17:10.731440 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dba62b67-572b-4250-a7de-1a092edd4c68-v4-0-config-system-service-ca\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.731686 master-0 kubenswrapper[30420]: I0318 10:17:10.731485 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dba62b67-572b-4250-a7de-1a092edd4c68-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.731686 master-0 kubenswrapper[30420]: I0318 10:17:10.731516 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dba62b67-572b-4250-a7de-1a092edd4c68-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.731686 master-0 kubenswrapper[30420]: I0318 10:17:10.731545 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dba62b67-572b-4250-a7de-1a092edd4c68-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.731686 master-0 kubenswrapper[30420]: I0318 10:17:10.731570 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dba62b67-572b-4250-a7de-1a092edd4c68-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.731686 master-0 kubenswrapper[30420]: I0318 10:17:10.731594 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dba62b67-572b-4250-a7de-1a092edd4c68-v4-0-config-system-router-certs\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.732330 master-0 kubenswrapper[30420]: I0318 10:17:10.732177 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dba62b67-572b-4250-a7de-1a092edd4c68-audit-dir\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.733513 
master-0 kubenswrapper[30420]: I0318 10:17:10.733479 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dba62b67-572b-4250-a7de-1a092edd4c68-audit-policies\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.733595 master-0 kubenswrapper[30420]: I0318 10:17:10.733570 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dba62b67-572b-4250-a7de-1a092edd4c68-v4-0-config-system-service-ca\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.733706 master-0 kubenswrapper[30420]: I0318 10:17:10.733676 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dba62b67-572b-4250-a7de-1a092edd4c68-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.734258 master-0 kubenswrapper[30420]: I0318 10:17:10.734227 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dba62b67-572b-4250-a7de-1a092edd4c68-v4-0-config-system-cliconfig\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.735615 master-0 kubenswrapper[30420]: I0318 10:17:10.735585 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/dba62b67-572b-4250-a7de-1a092edd4c68-v4-0-config-system-router-certs\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.736400 master-0 kubenswrapper[30420]: I0318 10:17:10.736375 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dba62b67-572b-4250-a7de-1a092edd4c68-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.736626 master-0 kubenswrapper[30420]: I0318 10:17:10.736603 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dba62b67-572b-4250-a7de-1a092edd4c68-v4-0-config-user-template-login\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.737316 master-0 kubenswrapper[30420]: I0318 10:17:10.737281 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dba62b67-572b-4250-a7de-1a092edd4c68-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.737316 master-0 kubenswrapper[30420]: I0318 10:17:10.737285 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dba62b67-572b-4250-a7de-1a092edd4c68-v4-0-config-system-session\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: 
\"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.737863 master-0 kubenswrapper[30420]: I0318 10:17:10.737802 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dba62b67-572b-4250-a7de-1a092edd4c68-v4-0-config-user-template-error\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.738426 master-0 kubenswrapper[30420]: I0318 10:17:10.738386 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dba62b67-572b-4250-a7de-1a092edd4c68-v4-0-config-system-serving-cert\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.750012 master-0 kubenswrapper[30420]: I0318 10:17:10.749963 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhmqw\" (UniqueName: \"kubernetes.io/projected/dba62b67-572b-4250-a7de-1a092edd4c68-kube-api-access-jhmqw\") pod \"oauth-openshift-754d5d5989-q9cdp\" (UID: \"dba62b67-572b-4250-a7de-1a092edd4c68\") " pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:10.848418 master-0 kubenswrapper[30420]: I0318 10:17:10.848358 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c6b76c555-hpcv7" Mar 18 10:17:10.893885 master-0 kubenswrapper[30420]: I0318 10:17:10.893803 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:11.116861 master-0 kubenswrapper[30420]: I0318 10:17:11.116785 30420 patch_prober.go:28] interesting pod/console-57d6b5b44-hc2hr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.109:8443/health\": dial tcp 10.128.0.109:8443: connect: connection refused" start-of-body= Mar 18 10:17:11.116994 master-0 kubenswrapper[30420]: I0318 10:17:11.116853 30420 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-57d6b5b44-hc2hr" podUID="999213fe-0b3a-4231-80be-6cffc474d94d" containerName="console" probeResult="failure" output="Get \"https://10.128.0.109:8443/health\": dial tcp 10.128.0.109:8443: connect: connection refused" Mar 18 10:17:11.125288 master-0 kubenswrapper[30420]: I0318 10:17:11.125233 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-7c6b76c555-hpcv7"] Mar 18 10:17:11.250090 master-0 kubenswrapper[30420]: I0318 10:17:11.247144 30420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7ff9bc57fc-q5plp"] Mar 18 10:17:11.281455 master-0 kubenswrapper[30420]: I0318 10:17:11.281393 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-575b5dddfb-mj9qv"] Mar 18 10:17:11.285350 master-0 kubenswrapper[30420]: I0318 10:17:11.284361 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-575b5dddfb-mj9qv" Mar 18 10:17:11.295591 master-0 kubenswrapper[30420]: I0318 10:17:11.295522 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-575b5dddfb-mj9qv"] Mar 18 10:17:11.350019 master-0 kubenswrapper[30420]: I0318 10:17:11.349957 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfqhj\" (UniqueName: \"kubernetes.io/projected/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-kube-api-access-kfqhj\") pod \"console-575b5dddfb-mj9qv\" (UID: \"cebc7ed6-93ef-46cc-8f8f-246c479bd68a\") " pod="openshift-console/console-575b5dddfb-mj9qv" Mar 18 10:17:11.350019 master-0 kubenswrapper[30420]: I0318 10:17:11.350002 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-oauth-serving-cert\") pod \"console-575b5dddfb-mj9qv\" (UID: \"cebc7ed6-93ef-46cc-8f8f-246c479bd68a\") " pod="openshift-console/console-575b5dddfb-mj9qv" Mar 18 10:17:11.350286 master-0 kubenswrapper[30420]: I0318 10:17:11.350041 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-console-serving-cert\") pod \"console-575b5dddfb-mj9qv\" (UID: \"cebc7ed6-93ef-46cc-8f8f-246c479bd68a\") " pod="openshift-console/console-575b5dddfb-mj9qv" Mar 18 10:17:11.350286 master-0 kubenswrapper[30420]: I0318 10:17:11.350071 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-trusted-ca-bundle\") pod \"console-575b5dddfb-mj9qv\" (UID: \"cebc7ed6-93ef-46cc-8f8f-246c479bd68a\") " pod="openshift-console/console-575b5dddfb-mj9qv" Mar 18 
10:17:11.350286 master-0 kubenswrapper[30420]: I0318 10:17:11.350099 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-console-config\") pod \"console-575b5dddfb-mj9qv\" (UID: \"cebc7ed6-93ef-46cc-8f8f-246c479bd68a\") " pod="openshift-console/console-575b5dddfb-mj9qv" Mar 18 10:17:11.350286 master-0 kubenswrapper[30420]: I0318 10:17:11.350144 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-service-ca\") pod \"console-575b5dddfb-mj9qv\" (UID: \"cebc7ed6-93ef-46cc-8f8f-246c479bd68a\") " pod="openshift-console/console-575b5dddfb-mj9qv" Mar 18 10:17:11.350286 master-0 kubenswrapper[30420]: I0318 10:17:11.350168 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-console-oauth-config\") pod \"console-575b5dddfb-mj9qv\" (UID: \"cebc7ed6-93ef-46cc-8f8f-246c479bd68a\") " pod="openshift-console/console-575b5dddfb-mj9qv" Mar 18 10:17:11.355947 master-0 kubenswrapper[30420]: I0318 10:17:11.354966 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-754d5d5989-q9cdp"] Mar 18 10:17:11.376685 master-0 kubenswrapper[30420]: I0318 10:17:11.376627 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-66b8ffb895-wg4k5" event={"ID":"5c77e26d-a46a-4552-88b8-2c8e3473437e","Type":"ContainerStarted","Data":"72c20fca05436f5fe40f49fecfaea482490271952e1ff0c8a2b6a9304ee028e9"} Mar 18 10:17:11.377264 master-0 kubenswrapper[30420]: I0318 10:17:11.377205 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-66b8ffb895-wg4k5" Mar 18 
10:17:11.380264 master-0 kubenswrapper[30420]: I0318 10:17:11.380217 30420 patch_prober.go:28] interesting pod/downloads-66b8ffb895-wg4k5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.105:8080/\": dial tcp 10.128.0.105:8080: connect: connection refused" start-of-body= Mar 18 10:17:11.380442 master-0 kubenswrapper[30420]: I0318 10:17:11.380271 30420 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-66b8ffb895-wg4k5" podUID="5c77e26d-a46a-4552-88b8-2c8e3473437e" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.105:8080/\": dial tcp 10.128.0.105:8080: connect: connection refused" Mar 18 10:17:11.380870 master-0 kubenswrapper[30420]: I0318 10:17:11.380839 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" event={"ID":"dba62b67-572b-4250-a7de-1a092edd4c68","Type":"ContainerStarted","Data":"f353c1293d1793e84831b3c5e485418287f6fe7ccb93026ab607997ecd889879"} Mar 18 10:17:11.387431 master-0 kubenswrapper[30420]: I0318 10:17:11.386934 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-7c6b76c555-hpcv7" event={"ID":"2c86cb82-ca1c-4237-b040-c8a7da74b73c","Type":"ContainerStarted","Data":"7bd04d2be10266cf8e951a7b659ba6c1c21a133ba5842c4efdefacd21c141a45"} Mar 18 10:17:11.399551 master-0 kubenswrapper[30420]: I0318 10:17:11.399454 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-66b8ffb895-wg4k5" podStartSLOduration=2.377282255 podStartE2EDuration="43.399433253s" podCreationTimestamp="2026-03-18 10:16:28 +0000 UTC" firstStartedPulling="2026-03-18 10:16:29.368141845 +0000 UTC m=+353.420887794" lastFinishedPulling="2026-03-18 10:17:10.390292813 +0000 UTC m=+394.443038792" observedRunningTime="2026-03-18 10:17:11.394475339 +0000 UTC m=+395.447221258" 
watchObservedRunningTime="2026-03-18 10:17:11.399433253 +0000 UTC m=+395.452179192" Mar 18 10:17:11.452320 master-0 kubenswrapper[30420]: I0318 10:17:11.452225 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfqhj\" (UniqueName: \"kubernetes.io/projected/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-kube-api-access-kfqhj\") pod \"console-575b5dddfb-mj9qv\" (UID: \"cebc7ed6-93ef-46cc-8f8f-246c479bd68a\") " pod="openshift-console/console-575b5dddfb-mj9qv" Mar 18 10:17:11.452320 master-0 kubenswrapper[30420]: I0318 10:17:11.452310 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-oauth-serving-cert\") pod \"console-575b5dddfb-mj9qv\" (UID: \"cebc7ed6-93ef-46cc-8f8f-246c479bd68a\") " pod="openshift-console/console-575b5dddfb-mj9qv" Mar 18 10:17:11.453326 master-0 kubenswrapper[30420]: I0318 10:17:11.452602 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-console-serving-cert\") pod \"console-575b5dddfb-mj9qv\" (UID: \"cebc7ed6-93ef-46cc-8f8f-246c479bd68a\") " pod="openshift-console/console-575b5dddfb-mj9qv" Mar 18 10:17:11.453326 master-0 kubenswrapper[30420]: I0318 10:17:11.452676 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-trusted-ca-bundle\") pod \"console-575b5dddfb-mj9qv\" (UID: \"cebc7ed6-93ef-46cc-8f8f-246c479bd68a\") " pod="openshift-console/console-575b5dddfb-mj9qv" Mar 18 10:17:11.453326 master-0 kubenswrapper[30420]: I0318 10:17:11.452708 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-console-config\") pod \"console-575b5dddfb-mj9qv\" (UID: \"cebc7ed6-93ef-46cc-8f8f-246c479bd68a\") " pod="openshift-console/console-575b5dddfb-mj9qv" Mar 18 10:17:11.453326 master-0 kubenswrapper[30420]: I0318 10:17:11.452753 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-service-ca\") pod \"console-575b5dddfb-mj9qv\" (UID: \"cebc7ed6-93ef-46cc-8f8f-246c479bd68a\") " pod="openshift-console/console-575b5dddfb-mj9qv" Mar 18 10:17:11.453326 master-0 kubenswrapper[30420]: I0318 10:17:11.452779 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-console-oauth-config\") pod \"console-575b5dddfb-mj9qv\" (UID: \"cebc7ed6-93ef-46cc-8f8f-246c479bd68a\") " pod="openshift-console/console-575b5dddfb-mj9qv" Mar 18 10:17:11.455648 master-0 kubenswrapper[30420]: I0318 10:17:11.455602 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-oauth-serving-cert\") pod \"console-575b5dddfb-mj9qv\" (UID: \"cebc7ed6-93ef-46cc-8f8f-246c479bd68a\") " pod="openshift-console/console-575b5dddfb-mj9qv" Mar 18 10:17:11.455648 master-0 kubenswrapper[30420]: I0318 10:17:11.455632 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-console-config\") pod \"console-575b5dddfb-mj9qv\" (UID: \"cebc7ed6-93ef-46cc-8f8f-246c479bd68a\") " pod="openshift-console/console-575b5dddfb-mj9qv" Mar 18 10:17:11.455914 master-0 kubenswrapper[30420]: I0318 10:17:11.455800 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-service-ca\") pod \"console-575b5dddfb-mj9qv\" (UID: \"cebc7ed6-93ef-46cc-8f8f-246c479bd68a\") " pod="openshift-console/console-575b5dddfb-mj9qv" Mar 18 10:17:11.456593 master-0 kubenswrapper[30420]: I0318 10:17:11.456556 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-console-serving-cert\") pod \"console-575b5dddfb-mj9qv\" (UID: \"cebc7ed6-93ef-46cc-8f8f-246c479bd68a\") " pod="openshift-console/console-575b5dddfb-mj9qv" Mar 18 10:17:11.457468 master-0 kubenswrapper[30420]: I0318 10:17:11.457437 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-console-oauth-config\") pod \"console-575b5dddfb-mj9qv\" (UID: \"cebc7ed6-93ef-46cc-8f8f-246c479bd68a\") " pod="openshift-console/console-575b5dddfb-mj9qv" Mar 18 10:17:11.475285 master-0 kubenswrapper[30420]: I0318 10:17:11.475227 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-trusted-ca-bundle\") pod \"console-575b5dddfb-mj9qv\" (UID: \"cebc7ed6-93ef-46cc-8f8f-246c479bd68a\") " pod="openshift-console/console-575b5dddfb-mj9qv" Mar 18 10:17:11.481436 master-0 kubenswrapper[30420]: I0318 10:17:11.481395 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfqhj\" (UniqueName: \"kubernetes.io/projected/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-kube-api-access-kfqhj\") pod \"console-575b5dddfb-mj9qv\" (UID: \"cebc7ed6-93ef-46cc-8f8f-246c479bd68a\") " pod="openshift-console/console-575b5dddfb-mj9qv" Mar 18 10:17:11.614852 master-0 kubenswrapper[30420]: I0318 10:17:11.614790 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-575b5dddfb-mj9qv" Mar 18 10:17:12.663168 master-0 kubenswrapper[30420]: I0318 10:17:12.663042 30420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edc60dd5-333f-44bc-bb10-f10673c59074" path="/var/lib/kubelet/pods/edc60dd5-333f-44bc-bb10-f10673c59074/volumes" Mar 18 10:17:12.669231 master-0 kubenswrapper[30420]: I0318 10:17:12.668959 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" event={"ID":"dba62b67-572b-4250-a7de-1a092edd4c68","Type":"ContainerStarted","Data":"9e061dd42e13c23cd7df149f505cf9b0391626af16f0c20248b814bd49ac988e"} Mar 18 10:17:12.669231 master-0 kubenswrapper[30420]: I0318 10:17:12.669002 30420 patch_prober.go:28] interesting pod/downloads-66b8ffb895-wg4k5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.105:8080/\": dial tcp 10.128.0.105:8080: connect: connection refused" start-of-body= Mar 18 10:17:12.669231 master-0 kubenswrapper[30420]: I0318 10:17:12.669033 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:12.669231 master-0 kubenswrapper[30420]: I0318 10:17:12.669053 30420 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-66b8ffb895-wg4k5" podUID="5c77e26d-a46a-4552-88b8-2c8e3473437e" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.105:8080/\": dial tcp 10.128.0.105:8080: connect: connection refused" Mar 18 10:17:12.830365 master-0 kubenswrapper[30420]: I0318 10:17:12.823960 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" podStartSLOduration=31.823938901 podStartE2EDuration="31.823938901s" podCreationTimestamp="2026-03-18 10:16:41 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:17:12.815675984 +0000 UTC m=+396.868421923" watchObservedRunningTime="2026-03-18 10:17:12.823938901 +0000 UTC m=+396.876684840" Mar 18 10:17:12.830365 master-0 kubenswrapper[30420]: I0318 10:17:12.825880 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-575b5dddfb-mj9qv"] Mar 18 10:17:12.833118 master-0 kubenswrapper[30420]: I0318 10:17:12.833073 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-754d5d5989-q9cdp" Mar 18 10:17:13.678214 master-0 kubenswrapper[30420]: I0318 10:17:13.678060 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-575b5dddfb-mj9qv" event={"ID":"cebc7ed6-93ef-46cc-8f8f-246c479bd68a","Type":"ContainerStarted","Data":"dafda178b37f81dbd6350febafe433b0ced5b2fa98b692355e8c421bde419c74"} Mar 18 10:17:13.678214 master-0 kubenswrapper[30420]: I0318 10:17:13.678101 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-575b5dddfb-mj9qv" event={"ID":"cebc7ed6-93ef-46cc-8f8f-246c479bd68a","Type":"ContainerStarted","Data":"26bab77a906fcdde1e299ed503a7b7dbb0a30a002fc00ccf94c5c503777d4cee"} Mar 18 10:17:15.696950 master-0 kubenswrapper[30420]: I0318 10:17:15.696869 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-7c6b76c555-hpcv7" event={"ID":"2c86cb82-ca1c-4237-b040-c8a7da74b73c","Type":"ContainerStarted","Data":"b8735c7357df0d92ef0745085fb8f46c6f6b6bd11f0a76d8bdd32849399269e8"} Mar 18 10:17:15.785655 master-0 kubenswrapper[30420]: I0318 10:17:15.785544 30420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-74bbfbc495-9qrz2" podUID="05094271-f491-4119-a9db-88b7fe4f7f3c" containerName="console" 
containerID="cri-o://36d63db8f3c986cdfcd87575d271d8cb4ae85be80326c8340b5c3145f2f22ce5" gracePeriod=15 Mar 18 10:17:16.383959 master-0 kubenswrapper[30420]: I0318 10:17:16.383851 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-575b5dddfb-mj9qv" podStartSLOduration=5.38379906 podStartE2EDuration="5.38379906s" podCreationTimestamp="2026-03-18 10:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:17:15.073792796 +0000 UTC m=+399.126538765" watchObservedRunningTime="2026-03-18 10:17:16.38379906 +0000 UTC m=+400.436545009" Mar 18 10:17:16.398322 master-0 kubenswrapper[30420]: I0318 10:17:16.398178 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-7c6b76c555-hpcv7" podStartSLOduration=4.108690372 podStartE2EDuration="7.39814465s" podCreationTimestamp="2026-03-18 10:17:09 +0000 UTC" firstStartedPulling="2026-03-18 10:17:11.127220638 +0000 UTC m=+395.179966567" lastFinishedPulling="2026-03-18 10:17:14.416674876 +0000 UTC m=+398.469420845" observedRunningTime="2026-03-18 10:17:16.379171874 +0000 UTC m=+400.431917843" watchObservedRunningTime="2026-03-18 10:17:16.39814465 +0000 UTC m=+400.450890619" Mar 18 10:17:16.717634 master-0 kubenswrapper[30420]: I0318 10:17:16.717376 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-74bbfbc495-9qrz2_05094271-f491-4119-a9db-88b7fe4f7f3c/console/0.log" Mar 18 10:17:16.717634 master-0 kubenswrapper[30420]: I0318 10:17:16.717418 30420 generic.go:334] "Generic (PLEG): container finished" podID="05094271-f491-4119-a9db-88b7fe4f7f3c" containerID="36d63db8f3c986cdfcd87575d271d8cb4ae85be80326c8340b5c3145f2f22ce5" exitCode=2 Mar 18 10:17:16.718274 master-0 kubenswrapper[30420]: I0318 10:17:16.718095 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-74bbfbc495-9qrz2" event={"ID":"05094271-f491-4119-a9db-88b7fe4f7f3c","Type":"ContainerDied","Data":"36d63db8f3c986cdfcd87575d271d8cb4ae85be80326c8340b5c3145f2f22ce5"} Mar 18 10:17:16.784318 master-0 kubenswrapper[30420]: I0318 10:17:16.784271 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-74bbfbc495-9qrz2_05094271-f491-4119-a9db-88b7fe4f7f3c/console/0.log" Mar 18 10:17:16.784612 master-0 kubenswrapper[30420]: I0318 10:17:16.784357 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-74bbfbc495-9qrz2" Mar 18 10:17:16.850377 master-0 kubenswrapper[30420]: I0318 10:17:16.850315 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/05094271-f491-4119-a9db-88b7fe4f7f3c-console-config\") pod \"05094271-f491-4119-a9db-88b7fe4f7f3c\" (UID: \"05094271-f491-4119-a9db-88b7fe4f7f3c\") " Mar 18 10:17:16.850377 master-0 kubenswrapper[30420]: I0318 10:17:16.850367 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/05094271-f491-4119-a9db-88b7fe4f7f3c-oauth-serving-cert\") pod \"05094271-f491-4119-a9db-88b7fe4f7f3c\" (UID: \"05094271-f491-4119-a9db-88b7fe4f7f3c\") " Mar 18 10:17:16.850695 master-0 kubenswrapper[30420]: I0318 10:17:16.850470 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/05094271-f491-4119-a9db-88b7fe4f7f3c-console-oauth-config\") pod \"05094271-f491-4119-a9db-88b7fe4f7f3c\" (UID: \"05094271-f491-4119-a9db-88b7fe4f7f3c\") " Mar 18 10:17:16.850695 master-0 kubenswrapper[30420]: I0318 10:17:16.850505 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/05094271-f491-4119-a9db-88b7fe4f7f3c-service-ca\") pod \"05094271-f491-4119-a9db-88b7fe4f7f3c\" (UID: \"05094271-f491-4119-a9db-88b7fe4f7f3c\") " Mar 18 10:17:16.850695 master-0 kubenswrapper[30420]: I0318 10:17:16.850554 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkntc\" (UniqueName: \"kubernetes.io/projected/05094271-f491-4119-a9db-88b7fe4f7f3c-kube-api-access-gkntc\") pod \"05094271-f491-4119-a9db-88b7fe4f7f3c\" (UID: \"05094271-f491-4119-a9db-88b7fe4f7f3c\") " Mar 18 10:17:16.850695 master-0 kubenswrapper[30420]: I0318 10:17:16.850579 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/05094271-f491-4119-a9db-88b7fe4f7f3c-console-serving-cert\") pod \"05094271-f491-4119-a9db-88b7fe4f7f3c\" (UID: \"05094271-f491-4119-a9db-88b7fe4f7f3c\") " Mar 18 10:17:16.850944 master-0 kubenswrapper[30420]: I0318 10:17:16.850857 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05094271-f491-4119-a9db-88b7fe4f7f3c-console-config" (OuterVolumeSpecName: "console-config") pod "05094271-f491-4119-a9db-88b7fe4f7f3c" (UID: "05094271-f491-4119-a9db-88b7fe4f7f3c"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:17:16.851224 master-0 kubenswrapper[30420]: I0318 10:17:16.851172 30420 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/05094271-f491-4119-a9db-88b7fe4f7f3c-console-config\") on node \"master-0\" DevicePath \"\"" Mar 18 10:17:16.851288 master-0 kubenswrapper[30420]: I0318 10:17:16.851208 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05094271-f491-4119-a9db-88b7fe4f7f3c-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "05094271-f491-4119-a9db-88b7fe4f7f3c" (UID: "05094271-f491-4119-a9db-88b7fe4f7f3c"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:17:16.851339 master-0 kubenswrapper[30420]: I0318 10:17:16.851290 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05094271-f491-4119-a9db-88b7fe4f7f3c-service-ca" (OuterVolumeSpecName: "service-ca") pod "05094271-f491-4119-a9db-88b7fe4f7f3c" (UID: "05094271-f491-4119-a9db-88b7fe4f7f3c"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:17:16.853804 master-0 kubenswrapper[30420]: I0318 10:17:16.853773 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05094271-f491-4119-a9db-88b7fe4f7f3c-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "05094271-f491-4119-a9db-88b7fe4f7f3c" (UID: "05094271-f491-4119-a9db-88b7fe4f7f3c"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 10:17:16.854755 master-0 kubenswrapper[30420]: I0318 10:17:16.854693 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05094271-f491-4119-a9db-88b7fe4f7f3c-kube-api-access-gkntc" (OuterVolumeSpecName: "kube-api-access-gkntc") pod "05094271-f491-4119-a9db-88b7fe4f7f3c" (UID: "05094271-f491-4119-a9db-88b7fe4f7f3c"). InnerVolumeSpecName "kube-api-access-gkntc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 10:17:16.862435 master-0 kubenswrapper[30420]: I0318 10:17:16.862375 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05094271-f491-4119-a9db-88b7fe4f7f3c-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "05094271-f491-4119-a9db-88b7fe4f7f3c" (UID: "05094271-f491-4119-a9db-88b7fe4f7f3c"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 10:17:16.952046 master-0 kubenswrapper[30420]: I0318 10:17:16.951966 30420 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/05094271-f491-4119-a9db-88b7fe4f7f3c-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 10:17:16.952046 master-0 kubenswrapper[30420]: I0318 10:17:16.952036 30420 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/05094271-f491-4119-a9db-88b7fe4f7f3c-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 18 10:17:16.952046 master-0 kubenswrapper[30420]: I0318 10:17:16.952048 30420 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/05094271-f491-4119-a9db-88b7fe4f7f3c-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 10:17:16.952046 master-0 kubenswrapper[30420]: I0318 10:17:16.952058 30420 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-gkntc\" (UniqueName: \"kubernetes.io/projected/05094271-f491-4119-a9db-88b7fe4f7f3c-kube-api-access-gkntc\") on node \"master-0\" DevicePath \"\"" Mar 18 10:17:16.952469 master-0 kubenswrapper[30420]: I0318 10:17:16.952067 30420 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/05094271-f491-4119-a9db-88b7fe4f7f3c-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 10:17:17.734024 master-0 kubenswrapper[30420]: I0318 10:17:17.733952 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-74bbfbc495-9qrz2_05094271-f491-4119-a9db-88b7fe4f7f3c/console/0.log" Mar 18 10:17:17.734024 master-0 kubenswrapper[30420]: I0318 10:17:17.734026 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-74bbfbc495-9qrz2" event={"ID":"05094271-f491-4119-a9db-88b7fe4f7f3c","Type":"ContainerDied","Data":"2a9332c92af92ef3b2ca251ca5e4141c5440002945b35af8ac6d12aad5abf66b"} Mar 18 10:17:17.735436 master-0 kubenswrapper[30420]: I0318 10:17:17.734070 30420 scope.go:117] "RemoveContainer" containerID="36d63db8f3c986cdfcd87575d271d8cb4ae85be80326c8340b5c3145f2f22ce5" Mar 18 10:17:17.735436 master-0 kubenswrapper[30420]: I0318 10:17:17.734116 30420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-74bbfbc495-9qrz2" Mar 18 10:17:17.788499 master-0 kubenswrapper[30420]: I0318 10:17:17.788395 30420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-74bbfbc495-9qrz2"] Mar 18 10:17:17.802397 master-0 kubenswrapper[30420]: I0318 10:17:17.802294 30420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-74bbfbc495-9qrz2"] Mar 18 10:17:18.180935 master-0 kubenswrapper[30420]: I0318 10:17:18.180816 30420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05094271-f491-4119-a9db-88b7fe4f7f3c" path="/var/lib/kubelet/pods/05094271-f491-4119-a9db-88b7fe4f7f3c/volumes" Mar 18 10:17:18.934628 master-0 kubenswrapper[30420]: I0318 10:17:18.934556 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-66b8ffb895-wg4k5" Mar 18 10:17:19.120474 master-0 kubenswrapper[30420]: I0318 10:17:19.120392 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:17:19.164747 master-0 kubenswrapper[30420]: I0318 10:17:19.164671 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:17:19.790058 master-0 kubenswrapper[30420]: I0318 10:17:19.789994 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:17:19.996432 master-0 kubenswrapper[30420]: I0318 10:17:19.996383 30420 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 10:17:19.997116 master-0 kubenswrapper[30420]: I0318 10:17:19.996639 30420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3ddfa5bb627414042dcc2d2204092c5a" 
containerName="cluster-policy-controller" containerID="cri-o://1bf60bf33cfc23f5f7f85a209ad5473b9719031ea092727003128b81f69dfc9e" gracePeriod=30 Mar 18 10:17:19.997116 master-0 kubenswrapper[30420]: I0318 10:17:19.996747 30420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3ddfa5bb627414042dcc2d2204092c5a" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://a6b636ab88d31a6f628201f90c1cf0b74da0bfac5e5f726055bbd0d041527530" gracePeriod=30 Mar 18 10:17:19.997116 master-0 kubenswrapper[30420]: I0318 10:17:19.996779 30420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3ddfa5bb627414042dcc2d2204092c5a" containerName="kube-controller-manager" containerID="cri-o://ccd6cbf6d4ce935b58d5a48e8e114cb176bae107aae7a2468a9c3a9b21d51cde" gracePeriod=30 Mar 18 10:17:19.997116 master-0 kubenswrapper[30420]: I0318 10:17:19.996739 30420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3ddfa5bb627414042dcc2d2204092c5a" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://86d62eb7c2cf9bbf042a562045a7b7ee2ca62e629ba61cfe06638c2f84729c97" gracePeriod=30 Mar 18 10:17:19.998915 master-0 kubenswrapper[30420]: I0318 10:17:19.998856 30420 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 10:17:19.999135 master-0 kubenswrapper[30420]: E0318 10:17:19.999107 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ddfa5bb627414042dcc2d2204092c5a" containerName="kube-controller-manager" Mar 18 10:17:19.999135 master-0 kubenswrapper[30420]: I0318 10:17:19.999121 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ddfa5bb627414042dcc2d2204092c5a" 
containerName="kube-controller-manager" Mar 18 10:17:19.999135 master-0 kubenswrapper[30420]: E0318 10:17:19.999134 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ddfa5bb627414042dcc2d2204092c5a" containerName="kube-controller-manager" Mar 18 10:17:19.999135 master-0 kubenswrapper[30420]: I0318 10:17:19.999140 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ddfa5bb627414042dcc2d2204092c5a" containerName="kube-controller-manager" Mar 18 10:17:19.999421 master-0 kubenswrapper[30420]: E0318 10:17:19.999152 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05094271-f491-4119-a9db-88b7fe4f7f3c" containerName="console" Mar 18 10:17:19.999421 master-0 kubenswrapper[30420]: I0318 10:17:19.999158 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="05094271-f491-4119-a9db-88b7fe4f7f3c" containerName="console" Mar 18 10:17:19.999421 master-0 kubenswrapper[30420]: E0318 10:17:19.999256 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ddfa5bb627414042dcc2d2204092c5a" containerName="kube-controller-manager-recovery-controller" Mar 18 10:17:19.999421 master-0 kubenswrapper[30420]: I0318 10:17:19.999263 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ddfa5bb627414042dcc2d2204092c5a" containerName="kube-controller-manager-recovery-controller" Mar 18 10:17:19.999421 master-0 kubenswrapper[30420]: E0318 10:17:19.999277 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ddfa5bb627414042dcc2d2204092c5a" containerName="cluster-policy-controller" Mar 18 10:17:19.999421 master-0 kubenswrapper[30420]: I0318 10:17:19.999283 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ddfa5bb627414042dcc2d2204092c5a" containerName="cluster-policy-controller" Mar 18 10:17:19.999421 master-0 kubenswrapper[30420]: E0318 10:17:19.999294 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ddfa5bb627414042dcc2d2204092c5a" 
containerName="kube-controller-manager-cert-syncer" Mar 18 10:17:19.999421 master-0 kubenswrapper[30420]: I0318 10:17:19.999300 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ddfa5bb627414042dcc2d2204092c5a" containerName="kube-controller-manager-cert-syncer" Mar 18 10:17:19.999784 master-0 kubenswrapper[30420]: I0318 10:17:19.999560 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ddfa5bb627414042dcc2d2204092c5a" containerName="kube-controller-manager-cert-syncer" Mar 18 10:17:19.999784 master-0 kubenswrapper[30420]: I0318 10:17:19.999598 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ddfa5bb627414042dcc2d2204092c5a" containerName="kube-controller-manager" Mar 18 10:17:19.999784 master-0 kubenswrapper[30420]: I0318 10:17:19.999608 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ddfa5bb627414042dcc2d2204092c5a" containerName="cluster-policy-controller" Mar 18 10:17:19.999784 master-0 kubenswrapper[30420]: I0318 10:17:19.999623 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ddfa5bb627414042dcc2d2204092c5a" containerName="kube-controller-manager-recovery-controller" Mar 18 10:17:19.999784 master-0 kubenswrapper[30420]: I0318 10:17:19.999633 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="05094271-f491-4119-a9db-88b7fe4f7f3c" containerName="console" Mar 18 10:17:19.999784 master-0 kubenswrapper[30420]: I0318 10:17:19.999646 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ddfa5bb627414042dcc2d2204092c5a" containerName="kube-controller-manager" Mar 18 10:17:20.000154 master-0 kubenswrapper[30420]: E0318 10:17:19.999799 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ddfa5bb627414042dcc2d2204092c5a" containerName="kube-controller-manager" Mar 18 10:17:20.000154 master-0 kubenswrapper[30420]: I0318 10:17:19.999813 30420 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3ddfa5bb627414042dcc2d2204092c5a" containerName="kube-controller-manager" Mar 18 10:17:20.000154 master-0 kubenswrapper[30420]: I0318 10:17:20.000027 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ddfa5bb627414042dcc2d2204092c5a" containerName="kube-controller-manager" Mar 18 10:17:20.111684 master-0 kubenswrapper[30420]: I0318 10:17:20.111624 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/06de8c68c0832ab8f7d68e9aec6f9555-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"06de8c68c0832ab8f7d68e9aec6f9555\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:17:20.112021 master-0 kubenswrapper[30420]: I0318 10:17:20.111796 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/06de8c68c0832ab8f7d68e9aec6f9555-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"06de8c68c0832ab8f7d68e9aec6f9555\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:17:20.213294 master-0 kubenswrapper[30420]: I0318 10:17:20.213171 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/06de8c68c0832ab8f7d68e9aec6f9555-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"06de8c68c0832ab8f7d68e9aec6f9555\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:17:20.213294 master-0 kubenswrapper[30420]: I0318 10:17:20.213280 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/06de8c68c0832ab8f7d68e9aec6f9555-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"06de8c68c0832ab8f7d68e9aec6f9555\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:17:20.213747 master-0 kubenswrapper[30420]: I0318 10:17:20.213356 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/06de8c68c0832ab8f7d68e9aec6f9555-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"06de8c68c0832ab8f7d68e9aec6f9555\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:17:20.213747 master-0 kubenswrapper[30420]: I0318 10:17:20.213569 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/06de8c68c0832ab8f7d68e9aec6f9555-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"06de8c68c0832ab8f7d68e9aec6f9555\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:17:20.290569 master-0 kubenswrapper[30420]: I0318 10:17:20.290517 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3ddfa5bb627414042dcc2d2204092c5a/kube-controller-manager/1.log" Mar 18 10:17:20.291668 master-0 kubenswrapper[30420]: I0318 10:17:20.291609 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3ddfa5bb627414042dcc2d2204092c5a/kube-controller-manager-cert-syncer/0.log" Mar 18 10:17:20.292385 master-0 kubenswrapper[30420]: I0318 10:17:20.292284 30420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:17:20.296398 master-0 kubenswrapper[30420]: I0318 10:17:20.296350 30420 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="3ddfa5bb627414042dcc2d2204092c5a" podUID="06de8c68c0832ab8f7d68e9aec6f9555" Mar 18 10:17:20.314659 master-0 kubenswrapper[30420]: I0318 10:17:20.314566 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3ddfa5bb627414042dcc2d2204092c5a-resource-dir\") pod \"3ddfa5bb627414042dcc2d2204092c5a\" (UID: \"3ddfa5bb627414042dcc2d2204092c5a\") " Mar 18 10:17:20.314659 master-0 kubenswrapper[30420]: I0318 10:17:20.314639 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ddfa5bb627414042dcc2d2204092c5a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3ddfa5bb627414042dcc2d2204092c5a" (UID: "3ddfa5bb627414042dcc2d2204092c5a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:17:20.315047 master-0 kubenswrapper[30420]: I0318 10:17:20.314760 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3ddfa5bb627414042dcc2d2204092c5a-cert-dir\") pod \"3ddfa5bb627414042dcc2d2204092c5a\" (UID: \"3ddfa5bb627414042dcc2d2204092c5a\") " Mar 18 10:17:20.315047 master-0 kubenswrapper[30420]: I0318 10:17:20.314902 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ddfa5bb627414042dcc2d2204092c5a-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3ddfa5bb627414042dcc2d2204092c5a" (UID: "3ddfa5bb627414042dcc2d2204092c5a"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:17:20.315353 master-0 kubenswrapper[30420]: I0318 10:17:20.315284 30420 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3ddfa5bb627414042dcc2d2204092c5a-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 10:17:20.315353 master-0 kubenswrapper[30420]: I0318 10:17:20.315315 30420 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3ddfa5bb627414042dcc2d2204092c5a-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 10:17:20.764282 master-0 kubenswrapper[30420]: I0318 10:17:20.764158 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3ddfa5bb627414042dcc2d2204092c5a/kube-controller-manager/1.log" Mar 18 10:17:20.765404 master-0 kubenswrapper[30420]: I0318 10:17:20.765373 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3ddfa5bb627414042dcc2d2204092c5a/kube-controller-manager-cert-syncer/0.log" Mar 18 10:17:20.765902 master-0 kubenswrapper[30420]: I0318 10:17:20.765867 30420 generic.go:334] "Generic (PLEG): container finished" podID="3ddfa5bb627414042dcc2d2204092c5a" containerID="ccd6cbf6d4ce935b58d5a48e8e114cb176bae107aae7a2468a9c3a9b21d51cde" exitCode=0 Mar 18 10:17:20.765902 master-0 kubenswrapper[30420]: I0318 10:17:20.765891 30420 generic.go:334] "Generic (PLEG): container finished" podID="3ddfa5bb627414042dcc2d2204092c5a" containerID="86d62eb7c2cf9bbf042a562045a7b7ee2ca62e629ba61cfe06638c2f84729c97" exitCode=0 Mar 18 10:17:20.765902 master-0 kubenswrapper[30420]: I0318 10:17:20.765900 30420 generic.go:334] "Generic (PLEG): container finished" podID="3ddfa5bb627414042dcc2d2204092c5a" containerID="a6b636ab88d31a6f628201f90c1cf0b74da0bfac5e5f726055bbd0d041527530" exitCode=2 Mar 18 10:17:20.765902 master-0 
kubenswrapper[30420]: I0318 10:17:20.765906 30420 generic.go:334] "Generic (PLEG): container finished" podID="3ddfa5bb627414042dcc2d2204092c5a" containerID="1bf60bf33cfc23f5f7f85a209ad5473b9719031ea092727003128b81f69dfc9e" exitCode=0 Mar 18 10:17:20.766109 master-0 kubenswrapper[30420]: I0318 10:17:20.765958 30420 scope.go:117] "RemoveContainer" containerID="ccd6cbf6d4ce935b58d5a48e8e114cb176bae107aae7a2468a9c3a9b21d51cde" Mar 18 10:17:20.766109 master-0 kubenswrapper[30420]: I0318 10:17:20.766051 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:17:20.770343 master-0 kubenswrapper[30420]: I0318 10:17:20.770282 30420 generic.go:334] "Generic (PLEG): container finished" podID="2be13c7e-ab8c-43a4-ad8e-4ef8fd233348" containerID="5ec20a8c23e21367e9d103100e2a4bdf8b14e279057d3c18d3ce728c07d6f81f" exitCode=0 Mar 18 10:17:20.771609 master-0 kubenswrapper[30420]: I0318 10:17:20.771565 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"2be13c7e-ab8c-43a4-ad8e-4ef8fd233348","Type":"ContainerDied","Data":"5ec20a8c23e21367e9d103100e2a4bdf8b14e279057d3c18d3ce728c07d6f81f"} Mar 18 10:17:20.773720 master-0 kubenswrapper[30420]: I0318 10:17:20.773669 30420 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="3ddfa5bb627414042dcc2d2204092c5a" podUID="06de8c68c0832ab8f7d68e9aec6f9555" Mar 18 10:17:20.790158 master-0 kubenswrapper[30420]: I0318 10:17:20.790005 30420 scope.go:117] "RemoveContainer" containerID="6262b65beb6f2b9683bc9235394d7e13025f8660d9fd4a525d7b5aaf9d248d9f" Mar 18 10:17:20.805325 master-0 kubenswrapper[30420]: I0318 10:17:20.805219 30420 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="3ddfa5bb627414042dcc2d2204092c5a" podUID="06de8c68c0832ab8f7d68e9aec6f9555" Mar 18 10:17:20.823244 master-0 kubenswrapper[30420]: I0318 10:17:20.823199 30420 scope.go:117] "RemoveContainer" containerID="86d62eb7c2cf9bbf042a562045a7b7ee2ca62e629ba61cfe06638c2f84729c97" Mar 18 10:17:20.841710 master-0 kubenswrapper[30420]: I0318 10:17:20.841655 30420 scope.go:117] "RemoveContainer" containerID="a6b636ab88d31a6f628201f90c1cf0b74da0bfac5e5f726055bbd0d041527530" Mar 18 10:17:20.860588 master-0 kubenswrapper[30420]: I0318 10:17:20.860514 30420 scope.go:117] "RemoveContainer" containerID="1bf60bf33cfc23f5f7f85a209ad5473b9719031ea092727003128b81f69dfc9e" Mar 18 10:17:20.880335 master-0 kubenswrapper[30420]: I0318 10:17:20.880262 30420 scope.go:117] "RemoveContainer" containerID="ccd6cbf6d4ce935b58d5a48e8e114cb176bae107aae7a2468a9c3a9b21d51cde" Mar 18 10:17:20.881755 master-0 kubenswrapper[30420]: E0318 10:17:20.881621 30420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccd6cbf6d4ce935b58d5a48e8e114cb176bae107aae7a2468a9c3a9b21d51cde\": container with ID starting with ccd6cbf6d4ce935b58d5a48e8e114cb176bae107aae7a2468a9c3a9b21d51cde not found: ID does not exist" containerID="ccd6cbf6d4ce935b58d5a48e8e114cb176bae107aae7a2468a9c3a9b21d51cde" Mar 18 10:17:20.881755 master-0 kubenswrapper[30420]: I0318 10:17:20.881707 30420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccd6cbf6d4ce935b58d5a48e8e114cb176bae107aae7a2468a9c3a9b21d51cde"} err="failed to get container status \"ccd6cbf6d4ce935b58d5a48e8e114cb176bae107aae7a2468a9c3a9b21d51cde\": rpc error: code = NotFound desc = could not find container \"ccd6cbf6d4ce935b58d5a48e8e114cb176bae107aae7a2468a9c3a9b21d51cde\": container with ID starting with ccd6cbf6d4ce935b58d5a48e8e114cb176bae107aae7a2468a9c3a9b21d51cde not found: 
ID does not exist" Mar 18 10:17:20.881755 master-0 kubenswrapper[30420]: I0318 10:17:20.881752 30420 scope.go:117] "RemoveContainer" containerID="6262b65beb6f2b9683bc9235394d7e13025f8660d9fd4a525d7b5aaf9d248d9f" Mar 18 10:17:20.882586 master-0 kubenswrapper[30420]: E0318 10:17:20.882525 30420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6262b65beb6f2b9683bc9235394d7e13025f8660d9fd4a525d7b5aaf9d248d9f\": container with ID starting with 6262b65beb6f2b9683bc9235394d7e13025f8660d9fd4a525d7b5aaf9d248d9f not found: ID does not exist" containerID="6262b65beb6f2b9683bc9235394d7e13025f8660d9fd4a525d7b5aaf9d248d9f" Mar 18 10:17:20.882698 master-0 kubenswrapper[30420]: I0318 10:17:20.882583 30420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6262b65beb6f2b9683bc9235394d7e13025f8660d9fd4a525d7b5aaf9d248d9f"} err="failed to get container status \"6262b65beb6f2b9683bc9235394d7e13025f8660d9fd4a525d7b5aaf9d248d9f\": rpc error: code = NotFound desc = could not find container \"6262b65beb6f2b9683bc9235394d7e13025f8660d9fd4a525d7b5aaf9d248d9f\": container with ID starting with 6262b65beb6f2b9683bc9235394d7e13025f8660d9fd4a525d7b5aaf9d248d9f not found: ID does not exist" Mar 18 10:17:20.882698 master-0 kubenswrapper[30420]: I0318 10:17:20.882622 30420 scope.go:117] "RemoveContainer" containerID="86d62eb7c2cf9bbf042a562045a7b7ee2ca62e629ba61cfe06638c2f84729c97" Mar 18 10:17:20.883078 master-0 kubenswrapper[30420]: E0318 10:17:20.882995 30420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86d62eb7c2cf9bbf042a562045a7b7ee2ca62e629ba61cfe06638c2f84729c97\": container with ID starting with 86d62eb7c2cf9bbf042a562045a7b7ee2ca62e629ba61cfe06638c2f84729c97 not found: ID does not exist" containerID="86d62eb7c2cf9bbf042a562045a7b7ee2ca62e629ba61cfe06638c2f84729c97" Mar 18 10:17:20.883078 
master-0 kubenswrapper[30420]: I0318 10:17:20.883044 30420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86d62eb7c2cf9bbf042a562045a7b7ee2ca62e629ba61cfe06638c2f84729c97"} err="failed to get container status \"86d62eb7c2cf9bbf042a562045a7b7ee2ca62e629ba61cfe06638c2f84729c97\": rpc error: code = NotFound desc = could not find container \"86d62eb7c2cf9bbf042a562045a7b7ee2ca62e629ba61cfe06638c2f84729c97\": container with ID starting with 86d62eb7c2cf9bbf042a562045a7b7ee2ca62e629ba61cfe06638c2f84729c97 not found: ID does not exist" Mar 18 10:17:20.883530 master-0 kubenswrapper[30420]: I0318 10:17:20.883081 30420 scope.go:117] "RemoveContainer" containerID="a6b636ab88d31a6f628201f90c1cf0b74da0bfac5e5f726055bbd0d041527530" Mar 18 10:17:20.883611 master-0 kubenswrapper[30420]: E0318 10:17:20.883521 30420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6b636ab88d31a6f628201f90c1cf0b74da0bfac5e5f726055bbd0d041527530\": container with ID starting with a6b636ab88d31a6f628201f90c1cf0b74da0bfac5e5f726055bbd0d041527530 not found: ID does not exist" containerID="a6b636ab88d31a6f628201f90c1cf0b74da0bfac5e5f726055bbd0d041527530" Mar 18 10:17:20.883611 master-0 kubenswrapper[30420]: I0318 10:17:20.883560 30420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6b636ab88d31a6f628201f90c1cf0b74da0bfac5e5f726055bbd0d041527530"} err="failed to get container status \"a6b636ab88d31a6f628201f90c1cf0b74da0bfac5e5f726055bbd0d041527530\": rpc error: code = NotFound desc = could not find container \"a6b636ab88d31a6f628201f90c1cf0b74da0bfac5e5f726055bbd0d041527530\": container with ID starting with a6b636ab88d31a6f628201f90c1cf0b74da0bfac5e5f726055bbd0d041527530 not found: ID does not exist" Mar 18 10:17:20.883611 master-0 kubenswrapper[30420]: I0318 10:17:20.883588 30420 scope.go:117] "RemoveContainer" 
containerID="1bf60bf33cfc23f5f7f85a209ad5473b9719031ea092727003128b81f69dfc9e" Mar 18 10:17:20.884040 master-0 kubenswrapper[30420]: E0318 10:17:20.884000 30420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bf60bf33cfc23f5f7f85a209ad5473b9719031ea092727003128b81f69dfc9e\": container with ID starting with 1bf60bf33cfc23f5f7f85a209ad5473b9719031ea092727003128b81f69dfc9e not found: ID does not exist" containerID="1bf60bf33cfc23f5f7f85a209ad5473b9719031ea092727003128b81f69dfc9e" Mar 18 10:17:20.884105 master-0 kubenswrapper[30420]: I0318 10:17:20.884045 30420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bf60bf33cfc23f5f7f85a209ad5473b9719031ea092727003128b81f69dfc9e"} err="failed to get container status \"1bf60bf33cfc23f5f7f85a209ad5473b9719031ea092727003128b81f69dfc9e\": rpc error: code = NotFound desc = could not find container \"1bf60bf33cfc23f5f7f85a209ad5473b9719031ea092727003128b81f69dfc9e\": container with ID starting with 1bf60bf33cfc23f5f7f85a209ad5473b9719031ea092727003128b81f69dfc9e not found: ID does not exist" Mar 18 10:17:20.884105 master-0 kubenswrapper[30420]: I0318 10:17:20.884078 30420 scope.go:117] "RemoveContainer" containerID="ccd6cbf6d4ce935b58d5a48e8e114cb176bae107aae7a2468a9c3a9b21d51cde" Mar 18 10:17:20.884438 master-0 kubenswrapper[30420]: I0318 10:17:20.884399 30420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccd6cbf6d4ce935b58d5a48e8e114cb176bae107aae7a2468a9c3a9b21d51cde"} err="failed to get container status \"ccd6cbf6d4ce935b58d5a48e8e114cb176bae107aae7a2468a9c3a9b21d51cde\": rpc error: code = NotFound desc = could not find container \"ccd6cbf6d4ce935b58d5a48e8e114cb176bae107aae7a2468a9c3a9b21d51cde\": container with ID starting with ccd6cbf6d4ce935b58d5a48e8e114cb176bae107aae7a2468a9c3a9b21d51cde not found: ID does not exist" Mar 18 10:17:20.884511 master-0 
kubenswrapper[30420]: I0318 10:17:20.884442 30420 scope.go:117] "RemoveContainer" containerID="6262b65beb6f2b9683bc9235394d7e13025f8660d9fd4a525d7b5aaf9d248d9f" Mar 18 10:17:20.884890 master-0 kubenswrapper[30420]: I0318 10:17:20.884849 30420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6262b65beb6f2b9683bc9235394d7e13025f8660d9fd4a525d7b5aaf9d248d9f"} err="failed to get container status \"6262b65beb6f2b9683bc9235394d7e13025f8660d9fd4a525d7b5aaf9d248d9f\": rpc error: code = NotFound desc = could not find container \"6262b65beb6f2b9683bc9235394d7e13025f8660d9fd4a525d7b5aaf9d248d9f\": container with ID starting with 6262b65beb6f2b9683bc9235394d7e13025f8660d9fd4a525d7b5aaf9d248d9f not found: ID does not exist" Mar 18 10:17:20.885151 master-0 kubenswrapper[30420]: I0318 10:17:20.884889 30420 scope.go:117] "RemoveContainer" containerID="86d62eb7c2cf9bbf042a562045a7b7ee2ca62e629ba61cfe06638c2f84729c97" Mar 18 10:17:20.885288 master-0 kubenswrapper[30420]: I0318 10:17:20.885247 30420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86d62eb7c2cf9bbf042a562045a7b7ee2ca62e629ba61cfe06638c2f84729c97"} err="failed to get container status \"86d62eb7c2cf9bbf042a562045a7b7ee2ca62e629ba61cfe06638c2f84729c97\": rpc error: code = NotFound desc = could not find container \"86d62eb7c2cf9bbf042a562045a7b7ee2ca62e629ba61cfe06638c2f84729c97\": container with ID starting with 86d62eb7c2cf9bbf042a562045a7b7ee2ca62e629ba61cfe06638c2f84729c97 not found: ID does not exist" Mar 18 10:17:20.885288 master-0 kubenswrapper[30420]: I0318 10:17:20.885284 30420 scope.go:117] "RemoveContainer" containerID="a6b636ab88d31a6f628201f90c1cf0b74da0bfac5e5f726055bbd0d041527530" Mar 18 10:17:20.885606 master-0 kubenswrapper[30420]: I0318 10:17:20.885550 30420 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a6b636ab88d31a6f628201f90c1cf0b74da0bfac5e5f726055bbd0d041527530"} err="failed to get container status \"a6b636ab88d31a6f628201f90c1cf0b74da0bfac5e5f726055bbd0d041527530\": rpc error: code = NotFound desc = could not find container \"a6b636ab88d31a6f628201f90c1cf0b74da0bfac5e5f726055bbd0d041527530\": container with ID starting with a6b636ab88d31a6f628201f90c1cf0b74da0bfac5e5f726055bbd0d041527530 not found: ID does not exist" Mar 18 10:17:20.885606 master-0 kubenswrapper[30420]: I0318 10:17:20.885581 30420 scope.go:117] "RemoveContainer" containerID="1bf60bf33cfc23f5f7f85a209ad5473b9719031ea092727003128b81f69dfc9e" Mar 18 10:17:20.886404 master-0 kubenswrapper[30420]: I0318 10:17:20.886317 30420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bf60bf33cfc23f5f7f85a209ad5473b9719031ea092727003128b81f69dfc9e"} err="failed to get container status \"1bf60bf33cfc23f5f7f85a209ad5473b9719031ea092727003128b81f69dfc9e\": rpc error: code = NotFound desc = could not find container \"1bf60bf33cfc23f5f7f85a209ad5473b9719031ea092727003128b81f69dfc9e\": container with ID starting with 1bf60bf33cfc23f5f7f85a209ad5473b9719031ea092727003128b81f69dfc9e not found: ID does not exist" Mar 18 10:17:20.886404 master-0 kubenswrapper[30420]: I0318 10:17:20.886352 30420 scope.go:117] "RemoveContainer" containerID="ccd6cbf6d4ce935b58d5a48e8e114cb176bae107aae7a2468a9c3a9b21d51cde" Mar 18 10:17:20.886734 master-0 kubenswrapper[30420]: I0318 10:17:20.886682 30420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccd6cbf6d4ce935b58d5a48e8e114cb176bae107aae7a2468a9c3a9b21d51cde"} err="failed to get container status \"ccd6cbf6d4ce935b58d5a48e8e114cb176bae107aae7a2468a9c3a9b21d51cde\": rpc error: code = NotFound desc = could not find container \"ccd6cbf6d4ce935b58d5a48e8e114cb176bae107aae7a2468a9c3a9b21d51cde\": container with ID starting with 
ccd6cbf6d4ce935b58d5a48e8e114cb176bae107aae7a2468a9c3a9b21d51cde not found: ID does not exist" Mar 18 10:17:20.886734 master-0 kubenswrapper[30420]: I0318 10:17:20.886720 30420 scope.go:117] "RemoveContainer" containerID="6262b65beb6f2b9683bc9235394d7e13025f8660d9fd4a525d7b5aaf9d248d9f" Mar 18 10:17:20.887300 master-0 kubenswrapper[30420]: I0318 10:17:20.887230 30420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6262b65beb6f2b9683bc9235394d7e13025f8660d9fd4a525d7b5aaf9d248d9f"} err="failed to get container status \"6262b65beb6f2b9683bc9235394d7e13025f8660d9fd4a525d7b5aaf9d248d9f\": rpc error: code = NotFound desc = could not find container \"6262b65beb6f2b9683bc9235394d7e13025f8660d9fd4a525d7b5aaf9d248d9f\": container with ID starting with 6262b65beb6f2b9683bc9235394d7e13025f8660d9fd4a525d7b5aaf9d248d9f not found: ID does not exist" Mar 18 10:17:20.887300 master-0 kubenswrapper[30420]: I0318 10:17:20.887274 30420 scope.go:117] "RemoveContainer" containerID="86d62eb7c2cf9bbf042a562045a7b7ee2ca62e629ba61cfe06638c2f84729c97" Mar 18 10:17:20.887998 master-0 kubenswrapper[30420]: I0318 10:17:20.887953 30420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86d62eb7c2cf9bbf042a562045a7b7ee2ca62e629ba61cfe06638c2f84729c97"} err="failed to get container status \"86d62eb7c2cf9bbf042a562045a7b7ee2ca62e629ba61cfe06638c2f84729c97\": rpc error: code = NotFound desc = could not find container \"86d62eb7c2cf9bbf042a562045a7b7ee2ca62e629ba61cfe06638c2f84729c97\": container with ID starting with 86d62eb7c2cf9bbf042a562045a7b7ee2ca62e629ba61cfe06638c2f84729c97 not found: ID does not exist" Mar 18 10:17:20.887998 master-0 kubenswrapper[30420]: I0318 10:17:20.887983 30420 scope.go:117] "RemoveContainer" containerID="a6b636ab88d31a6f628201f90c1cf0b74da0bfac5e5f726055bbd0d041527530" Mar 18 10:17:20.888477 master-0 kubenswrapper[30420]: I0318 10:17:20.888426 30420 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6b636ab88d31a6f628201f90c1cf0b74da0bfac5e5f726055bbd0d041527530"} err="failed to get container status \"a6b636ab88d31a6f628201f90c1cf0b74da0bfac5e5f726055bbd0d041527530\": rpc error: code = NotFound desc = could not find container \"a6b636ab88d31a6f628201f90c1cf0b74da0bfac5e5f726055bbd0d041527530\": container with ID starting with a6b636ab88d31a6f628201f90c1cf0b74da0bfac5e5f726055bbd0d041527530 not found: ID does not exist" Mar 18 10:17:20.888477 master-0 kubenswrapper[30420]: I0318 10:17:20.888465 30420 scope.go:117] "RemoveContainer" containerID="1bf60bf33cfc23f5f7f85a209ad5473b9719031ea092727003128b81f69dfc9e" Mar 18 10:17:20.889004 master-0 kubenswrapper[30420]: I0318 10:17:20.888968 30420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bf60bf33cfc23f5f7f85a209ad5473b9719031ea092727003128b81f69dfc9e"} err="failed to get container status \"1bf60bf33cfc23f5f7f85a209ad5473b9719031ea092727003128b81f69dfc9e\": rpc error: code = NotFound desc = could not find container \"1bf60bf33cfc23f5f7f85a209ad5473b9719031ea092727003128b81f69dfc9e\": container with ID starting with 1bf60bf33cfc23f5f7f85a209ad5473b9719031ea092727003128b81f69dfc9e not found: ID does not exist" Mar 18 10:17:20.889004 master-0 kubenswrapper[30420]: I0318 10:17:20.889000 30420 scope.go:117] "RemoveContainer" containerID="ccd6cbf6d4ce935b58d5a48e8e114cb176bae107aae7a2468a9c3a9b21d51cde" Mar 18 10:17:20.889371 master-0 kubenswrapper[30420]: I0318 10:17:20.889322 30420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccd6cbf6d4ce935b58d5a48e8e114cb176bae107aae7a2468a9c3a9b21d51cde"} err="failed to get container status \"ccd6cbf6d4ce935b58d5a48e8e114cb176bae107aae7a2468a9c3a9b21d51cde\": rpc error: code = NotFound desc = could not find container \"ccd6cbf6d4ce935b58d5a48e8e114cb176bae107aae7a2468a9c3a9b21d51cde\": container with ID starting 
with ccd6cbf6d4ce935b58d5a48e8e114cb176bae107aae7a2468a9c3a9b21d51cde not found: ID does not exist" Mar 18 10:17:20.889463 master-0 kubenswrapper[30420]: I0318 10:17:20.889372 30420 scope.go:117] "RemoveContainer" containerID="6262b65beb6f2b9683bc9235394d7e13025f8660d9fd4a525d7b5aaf9d248d9f" Mar 18 10:17:20.889865 master-0 kubenswrapper[30420]: I0318 10:17:20.889793 30420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6262b65beb6f2b9683bc9235394d7e13025f8660d9fd4a525d7b5aaf9d248d9f"} err="failed to get container status \"6262b65beb6f2b9683bc9235394d7e13025f8660d9fd4a525d7b5aaf9d248d9f\": rpc error: code = NotFound desc = could not find container \"6262b65beb6f2b9683bc9235394d7e13025f8660d9fd4a525d7b5aaf9d248d9f\": container with ID starting with 6262b65beb6f2b9683bc9235394d7e13025f8660d9fd4a525d7b5aaf9d248d9f not found: ID does not exist" Mar 18 10:17:20.889865 master-0 kubenswrapper[30420]: I0318 10:17:20.889839 30420 scope.go:117] "RemoveContainer" containerID="86d62eb7c2cf9bbf042a562045a7b7ee2ca62e629ba61cfe06638c2f84729c97" Mar 18 10:17:20.890329 master-0 kubenswrapper[30420]: I0318 10:17:20.890286 30420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86d62eb7c2cf9bbf042a562045a7b7ee2ca62e629ba61cfe06638c2f84729c97"} err="failed to get container status \"86d62eb7c2cf9bbf042a562045a7b7ee2ca62e629ba61cfe06638c2f84729c97\": rpc error: code = NotFound desc = could not find container \"86d62eb7c2cf9bbf042a562045a7b7ee2ca62e629ba61cfe06638c2f84729c97\": container with ID starting with 86d62eb7c2cf9bbf042a562045a7b7ee2ca62e629ba61cfe06638c2f84729c97 not found: ID does not exist" Mar 18 10:17:20.890329 master-0 kubenswrapper[30420]: I0318 10:17:20.890312 30420 scope.go:117] "RemoveContainer" containerID="a6b636ab88d31a6f628201f90c1cf0b74da0bfac5e5f726055bbd0d041527530" Mar 18 10:17:20.890625 master-0 kubenswrapper[30420]: I0318 10:17:20.890572 30420 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6b636ab88d31a6f628201f90c1cf0b74da0bfac5e5f726055bbd0d041527530"} err="failed to get container status \"a6b636ab88d31a6f628201f90c1cf0b74da0bfac5e5f726055bbd0d041527530\": rpc error: code = NotFound desc = could not find container \"a6b636ab88d31a6f628201f90c1cf0b74da0bfac5e5f726055bbd0d041527530\": container with ID starting with a6b636ab88d31a6f628201f90c1cf0b74da0bfac5e5f726055bbd0d041527530 not found: ID does not exist" Mar 18 10:17:20.890625 master-0 kubenswrapper[30420]: I0318 10:17:20.890616 30420 scope.go:117] "RemoveContainer" containerID="1bf60bf33cfc23f5f7f85a209ad5473b9719031ea092727003128b81f69dfc9e" Mar 18 10:17:20.891097 master-0 kubenswrapper[30420]: I0318 10:17:20.891051 30420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bf60bf33cfc23f5f7f85a209ad5473b9719031ea092727003128b81f69dfc9e"} err="failed to get container status \"1bf60bf33cfc23f5f7f85a209ad5473b9719031ea092727003128b81f69dfc9e\": rpc error: code = NotFound desc = could not find container \"1bf60bf33cfc23f5f7f85a209ad5473b9719031ea092727003128b81f69dfc9e\": container with ID starting with 1bf60bf33cfc23f5f7f85a209ad5473b9719031ea092727003128b81f69dfc9e not found: ID does not exist" Mar 18 10:17:21.122799 master-0 kubenswrapper[30420]: I0318 10:17:21.122734 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-57d6b5b44-hc2hr" Mar 18 10:17:21.127414 master-0 kubenswrapper[30420]: I0318 10:17:21.127320 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-57d6b5b44-hc2hr" Mar 18 10:17:21.615009 master-0 kubenswrapper[30420]: I0318 10:17:21.614952 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-575b5dddfb-mj9qv" Mar 18 10:17:21.616196 master-0 kubenswrapper[30420]: I0318 10:17:21.616094 30420 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-575b5dddfb-mj9qv" Mar 18 10:17:21.624436 master-0 kubenswrapper[30420]: I0318 10:17:21.624391 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-575b5dddfb-mj9qv" Mar 18 10:17:21.792787 master-0 kubenswrapper[30420]: I0318 10:17:21.792728 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-575b5dddfb-mj9qv" Mar 18 10:17:22.179259 master-0 kubenswrapper[30420]: I0318 10:17:22.179105 30420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ddfa5bb627414042dcc2d2204092c5a" path="/var/lib/kubelet/pods/3ddfa5bb627414042dcc2d2204092c5a/volumes" Mar 18 10:17:22.296471 master-0 kubenswrapper[30420]: I0318 10:17:22.296420 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 10:17:22.357219 master-0 kubenswrapper[30420]: I0318 10:17:22.357135 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2be13c7e-ab8c-43a4-ad8e-4ef8fd233348-kubelet-dir\") pod \"2be13c7e-ab8c-43a4-ad8e-4ef8fd233348\" (UID: \"2be13c7e-ab8c-43a4-ad8e-4ef8fd233348\") " Mar 18 10:17:22.357410 master-0 kubenswrapper[30420]: I0318 10:17:22.357240 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2be13c7e-ab8c-43a4-ad8e-4ef8fd233348-kube-api-access\") pod \"2be13c7e-ab8c-43a4-ad8e-4ef8fd233348\" (UID: \"2be13c7e-ab8c-43a4-ad8e-4ef8fd233348\") " Mar 18 10:17:22.357410 master-0 kubenswrapper[30420]: I0318 10:17:22.357275 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2be13c7e-ab8c-43a4-ad8e-4ef8fd233348-var-lock\") pod 
\"2be13c7e-ab8c-43a4-ad8e-4ef8fd233348\" (UID: \"2be13c7e-ab8c-43a4-ad8e-4ef8fd233348\") " Mar 18 10:17:22.357624 master-0 kubenswrapper[30420]: I0318 10:17:22.357479 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2be13c7e-ab8c-43a4-ad8e-4ef8fd233348-var-lock" (OuterVolumeSpecName: "var-lock") pod "2be13c7e-ab8c-43a4-ad8e-4ef8fd233348" (UID: "2be13c7e-ab8c-43a4-ad8e-4ef8fd233348"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:17:22.357624 master-0 kubenswrapper[30420]: I0318 10:17:22.357504 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2be13c7e-ab8c-43a4-ad8e-4ef8fd233348-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2be13c7e-ab8c-43a4-ad8e-4ef8fd233348" (UID: "2be13c7e-ab8c-43a4-ad8e-4ef8fd233348"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 10:17:22.360642 master-0 kubenswrapper[30420]: I0318 10:17:22.360595 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2be13c7e-ab8c-43a4-ad8e-4ef8fd233348-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2be13c7e-ab8c-43a4-ad8e-4ef8fd233348" (UID: "2be13c7e-ab8c-43a4-ad8e-4ef8fd233348"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 10:17:22.459488 master-0 kubenswrapper[30420]: I0318 10:17:22.459340 30420 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2be13c7e-ab8c-43a4-ad8e-4ef8fd233348-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 10:17:22.459488 master-0 kubenswrapper[30420]: I0318 10:17:22.459405 30420 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2be13c7e-ab8c-43a4-ad8e-4ef8fd233348-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 10:17:22.459488 master-0 kubenswrapper[30420]: I0318 10:17:22.459431 30420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2be13c7e-ab8c-43a4-ad8e-4ef8fd233348-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 10:17:22.796573 master-0 kubenswrapper[30420]: I0318 10:17:22.796426 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"2be13c7e-ab8c-43a4-ad8e-4ef8fd233348","Type":"ContainerDied","Data":"70934370d695eca554e46b1bb0b2b8cc28acb5193a15eb7f9ae3352d31d135b9"} Mar 18 10:17:22.797071 master-0 kubenswrapper[30420]: I0318 10:17:22.796602 30420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70934370d695eca554e46b1bb0b2b8cc28acb5193a15eb7f9ae3352d31d135b9" Mar 18 10:17:22.797071 master-0 kubenswrapper[30420]: I0318 10:17:22.796608 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 10:17:32.167160 master-0 kubenswrapper[30420]: I0318 10:17:32.167025 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:17:32.203278 master-0 kubenswrapper[30420]: I0318 10:17:32.203222 30420 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3d803429-4d5c-4527-a5e8-6ceccfb9bc22" Mar 18 10:17:32.203278 master-0 kubenswrapper[30420]: I0318 10:17:32.203273 30420 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3d803429-4d5c-4527-a5e8-6ceccfb9bc22" Mar 18 10:17:32.218799 master-0 kubenswrapper[30420]: I0318 10:17:32.218734 30420 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:17:32.227460 master-0 kubenswrapper[30420]: I0318 10:17:32.227394 30420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 10:17:32.234008 master-0 kubenswrapper[30420]: I0318 10:17:32.233952 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:17:32.238513 master-0 kubenswrapper[30420]: I0318 10:17:32.238451 30420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 10:17:32.244967 master-0 kubenswrapper[30420]: I0318 10:17:32.244896 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 10:17:32.255149 master-0 kubenswrapper[30420]: W0318 10:17:32.255085 30420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06de8c68c0832ab8f7d68e9aec6f9555.slice/crio-e32d8b5a0959970666c4b3d09dd8c7045f47fed61fdb3083ffe6f268a7cb3a28 WatchSource:0}: Error finding container e32d8b5a0959970666c4b3d09dd8c7045f47fed61fdb3083ffe6f268a7cb3a28: Status 404 returned error can't find the container with id e32d8b5a0959970666c4b3d09dd8c7045f47fed61fdb3083ffe6f268a7cb3a28 Mar 18 10:17:32.884640 master-0 kubenswrapper[30420]: I0318 10:17:32.884555 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"06de8c68c0832ab8f7d68e9aec6f9555","Type":"ContainerStarted","Data":"80a9362bed3132ed0f82b30c7ecfd3d8cdc4778cfbb3d8abb7c0c64bd8648dfc"} Mar 18 10:17:32.884640 master-0 kubenswrapper[30420]: I0318 10:17:32.884639 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"06de8c68c0832ab8f7d68e9aec6f9555","Type":"ContainerStarted","Data":"e32d8b5a0959970666c4b3d09dd8c7045f47fed61fdb3083ffe6f268a7cb3a28"} Mar 18 10:17:33.896009 master-0 kubenswrapper[30420]: I0318 10:17:33.895956 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"06de8c68c0832ab8f7d68e9aec6f9555","Type":"ContainerStarted","Data":"ebcf9ff49ec212c7622a5a6d391b5ab612d9d50d00fbb73ef82be363db7b602e"} Mar 18 10:17:33.896626 master-0 kubenswrapper[30420]: I0318 10:17:33.896002 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"06de8c68c0832ab8f7d68e9aec6f9555","Type":"ContainerStarted","Data":"c02f72c1270c1661d512b148667fe5464a36d63407642162eba5eb014d4a873c"} Mar 18 10:17:33.896626 master-0 kubenswrapper[30420]: I0318 10:17:33.896070 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"06de8c68c0832ab8f7d68e9aec6f9555","Type":"ContainerStarted","Data":"8ca24eab8fef18c814d01ab59dd263256d426074d27c723467898a7c4e7cd88d"} Mar 18 10:17:36.312286 master-0 kubenswrapper[30420]: I0318 10:17:36.312167 30420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7ff9bc57fc-q5plp" podUID="6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985" containerName="console" containerID="cri-o://854e7f8ecf3d24e75b2eab40318e503bdd48171ac04595739f97ec4df6a8a468" gracePeriod=15 Mar 18 10:17:36.768759 master-0 kubenswrapper[30420]: I0318 10:17:36.768701 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7ff9bc57fc-q5plp_6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985/console/0.log" Mar 18 10:17:36.768987 master-0 kubenswrapper[30420]: I0318 10:17:36.768772 30420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7ff9bc57fc-q5plp" Mar 18 10:17:36.789096 master-0 kubenswrapper[30420]: I0318 10:17:36.789012 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=4.788989829 podStartE2EDuration="4.788989829s" podCreationTimestamp="2026-03-18 10:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:17:33.912494618 +0000 UTC m=+417.965240547" watchObservedRunningTime="2026-03-18 10:17:36.788989829 +0000 UTC m=+420.841735768" Mar 18 10:17:36.837158 master-0 kubenswrapper[30420]: I0318 10:17:36.837045 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-console-serving-cert\") pod \"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985\" (UID: \"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985\") " Mar 18 10:17:36.837158 master-0 kubenswrapper[30420]: I0318 10:17:36.837107 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-console-config\") pod \"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985\" (UID: \"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985\") " Mar 18 10:17:36.837158 master-0 kubenswrapper[30420]: I0318 10:17:36.837139 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-trusted-ca-bundle\") pod \"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985\" (UID: \"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985\") " Mar 18 10:17:36.837455 master-0 kubenswrapper[30420]: I0318 10:17:36.837262 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-service-ca\") pod \"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985\" (UID: \"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985\") " Mar 18 10:17:36.837455 master-0 kubenswrapper[30420]: I0318 10:17:36.837362 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-console-oauth-config\") pod \"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985\" (UID: \"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985\") " Mar 18 10:17:36.837455 master-0 kubenswrapper[30420]: I0318 10:17:36.837409 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwbnw\" (UniqueName: \"kubernetes.io/projected/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-kube-api-access-lwbnw\") pod \"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985\" (UID: \"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985\") " Mar 18 10:17:36.837455 master-0 kubenswrapper[30420]: I0318 10:17:36.837434 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-oauth-serving-cert\") pod \"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985\" (UID: \"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985\") " Mar 18 10:17:36.837676 master-0 kubenswrapper[30420]: I0318 10:17:36.837547 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-console-config" (OuterVolumeSpecName: "console-config") pod "6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985" (UID: "6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:17:36.837854 master-0 kubenswrapper[30420]: I0318 10:17:36.837833 30420 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-console-config\") on node \"master-0\" DevicePath \"\"" Mar 18 10:17:36.837854 master-0 kubenswrapper[30420]: I0318 10:17:36.837812 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-service-ca" (OuterVolumeSpecName: "service-ca") pod "6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985" (UID: "6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:17:36.838020 master-0 kubenswrapper[30420]: I0318 10:17:36.837883 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985" (UID: "6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:17:36.838384 master-0 kubenswrapper[30420]: I0318 10:17:36.838332 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985" (UID: "6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:17:36.839592 master-0 kubenswrapper[30420]: I0318 10:17:36.839556 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985" (UID: "6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 10:17:36.840042 master-0 kubenswrapper[30420]: I0318 10:17:36.840009 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985" (UID: "6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 10:17:36.841042 master-0 kubenswrapper[30420]: I0318 10:17:36.841002 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-kube-api-access-lwbnw" (OuterVolumeSpecName: "kube-api-access-lwbnw") pod "6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985" (UID: "6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985"). InnerVolumeSpecName "kube-api-access-lwbnw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 10:17:36.927879 master-0 kubenswrapper[30420]: I0318 10:17:36.927762 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7ff9bc57fc-q5plp_6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985/console/0.log" Mar 18 10:17:36.927879 master-0 kubenswrapper[30420]: I0318 10:17:36.927837 30420 generic.go:334] "Generic (PLEG): container finished" podID="6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985" containerID="854e7f8ecf3d24e75b2eab40318e503bdd48171ac04595739f97ec4df6a8a468" exitCode=2 Mar 18 10:17:36.928083 master-0 kubenswrapper[30420]: I0318 10:17:36.927876 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7ff9bc57fc-q5plp" event={"ID":"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985","Type":"ContainerDied","Data":"854e7f8ecf3d24e75b2eab40318e503bdd48171ac04595739f97ec4df6a8a468"} Mar 18 10:17:36.928083 master-0 kubenswrapper[30420]: I0318 10:17:36.927906 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7ff9bc57fc-q5plp" event={"ID":"6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985","Type":"ContainerDied","Data":"3da872bb6c93c87a71567fbc7cb2b38afb3d6b3107b781693000b9bb6d2b8971"} Mar 18 10:17:36.928083 master-0 kubenswrapper[30420]: I0318 10:17:36.927928 30420 scope.go:117] "RemoveContainer" containerID="854e7f8ecf3d24e75b2eab40318e503bdd48171ac04595739f97ec4df6a8a468" Mar 18 10:17:36.928083 master-0 kubenswrapper[30420]: I0318 10:17:36.928046 30420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7ff9bc57fc-q5plp" Mar 18 10:17:36.939773 master-0 kubenswrapper[30420]: I0318 10:17:36.939721 30420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwbnw\" (UniqueName: \"kubernetes.io/projected/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-kube-api-access-lwbnw\") on node \"master-0\" DevicePath \"\"" Mar 18 10:17:36.939773 master-0 kubenswrapper[30420]: I0318 10:17:36.939772 30420 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 10:17:36.939961 master-0 kubenswrapper[30420]: I0318 10:17:36.939789 30420 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 10:17:36.939961 master-0 kubenswrapper[30420]: I0318 10:17:36.939803 30420 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 10:17:36.939961 master-0 kubenswrapper[30420]: I0318 10:17:36.939817 30420 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 10:17:36.939961 master-0 kubenswrapper[30420]: I0318 10:17:36.939856 30420 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 18 10:17:36.952031 master-0 kubenswrapper[30420]: I0318 10:17:36.951992 30420 scope.go:117] "RemoveContainer" 
containerID="854e7f8ecf3d24e75b2eab40318e503bdd48171ac04595739f97ec4df6a8a468" Mar 18 10:17:36.952520 master-0 kubenswrapper[30420]: E0318 10:17:36.952455 30420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"854e7f8ecf3d24e75b2eab40318e503bdd48171ac04595739f97ec4df6a8a468\": container with ID starting with 854e7f8ecf3d24e75b2eab40318e503bdd48171ac04595739f97ec4df6a8a468 not found: ID does not exist" containerID="854e7f8ecf3d24e75b2eab40318e503bdd48171ac04595739f97ec4df6a8a468" Mar 18 10:17:36.952596 master-0 kubenswrapper[30420]: I0318 10:17:36.952532 30420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"854e7f8ecf3d24e75b2eab40318e503bdd48171ac04595739f97ec4df6a8a468"} err="failed to get container status \"854e7f8ecf3d24e75b2eab40318e503bdd48171ac04595739f97ec4df6a8a468\": rpc error: code = NotFound desc = could not find container \"854e7f8ecf3d24e75b2eab40318e503bdd48171ac04595739f97ec4df6a8a468\": container with ID starting with 854e7f8ecf3d24e75b2eab40318e503bdd48171ac04595739f97ec4df6a8a468 not found: ID does not exist" Mar 18 10:17:36.991249 master-0 kubenswrapper[30420]: I0318 10:17:36.991158 30420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7ff9bc57fc-q5plp"] Mar 18 10:17:37.006808 master-0 kubenswrapper[30420]: I0318 10:17:37.003906 30420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7ff9bc57fc-q5plp"] Mar 18 10:17:37.700220 master-0 kubenswrapper[30420]: E0318 10:17:37.700130 30420 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ae2feb1_eff5_4ab6_a6e8_df10ff8c6985.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ae2feb1_eff5_4ab6_a6e8_df10ff8c6985.slice/crio-3da872bb6c93c87a71567fbc7cb2b38afb3d6b3107b781693000b9bb6d2b8971\": RecentStats: unable to find data in memory cache]" Mar 18 10:17:37.700220 master-0 kubenswrapper[30420]: E0318 10:17:37.700249 30420 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ae2feb1_eff5_4ab6_a6e8_df10ff8c6985.slice/crio-3da872bb6c93c87a71567fbc7cb2b38afb3d6b3107b781693000b9bb6d2b8971\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ae2feb1_eff5_4ab6_a6e8_df10ff8c6985.slice\": RecentStats: unable to find data in memory cache]" Mar 18 10:17:38.175660 master-0 kubenswrapper[30420]: I0318 10:17:38.175600 30420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985" path="/var/lib/kubelet/pods/6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985/volumes" Mar 18 10:17:42.234221 master-0 kubenswrapper[30420]: I0318 10:17:42.234169 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:17:42.234221 master-0 kubenswrapper[30420]: I0318 10:17:42.234224 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:17:42.234221 master-0 kubenswrapper[30420]: I0318 10:17:42.234236 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:17:42.234804 master-0 kubenswrapper[30420]: I0318 10:17:42.234245 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:17:42.235352 
master-0 kubenswrapper[30420]: I0318 10:17:42.235324 30420 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 18 10:17:42.235504 master-0 kubenswrapper[30420]: I0318 10:17:42.235480 30420 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="06de8c68c0832ab8f7d68e9aec6f9555" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 18 10:17:42.237838 master-0 kubenswrapper[30420]: I0318 10:17:42.237809 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 10:17:43.184632 master-0 kubenswrapper[30420]: E0318 10:17:43.184586 30420 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ae2feb1_eff5_4ab6_a6e8_df10ff8c6985.slice/crio-3da872bb6c93c87a71567fbc7cb2b38afb3d6b3107b781693000b9bb6d2b8971\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ae2feb1_eff5_4ab6_a6e8_df10ff8c6985.slice\": RecentStats: unable to find data in memory cache]" Mar 18 10:17:43.426141 master-0 kubenswrapper[30420]: I0318 10:17:43.426087 30420 patch_prober.go:28] interesting pod/metrics-server-74c475bc87-xx98m container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.128.0.76:10250/livez\": dial tcp 10.128.0.76:10250: connect: connection refused" start-of-body= Mar 18 10:17:43.426730 master-0 
kubenswrapper[30420]: I0318 10:17:43.426158 30420 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" podUID="106fc2a2-9e7b-4f86-94b8-b1a1906646d8" containerName="metrics-server" probeResult="failure" output="Get \"https://10.128.0.76:10250/livez\": dial tcp 10.128.0.76:10250: connect: connection refused" Mar 18 10:17:43.708540 master-0 kubenswrapper[30420]: I0318 10:17:43.707423 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" Mar 18 10:17:43.756710 master-0 kubenswrapper[30420]: I0318 10:17:43.756618 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-metrics-server-audit-profiles\") pod \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " Mar 18 10:17:43.756710 master-0 kubenswrapper[30420]: I0318 10:17:43.756669 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-secret-metrics-client-certs\") pod \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " Mar 18 10:17:43.756710 master-0 kubenswrapper[30420]: I0318 10:17:43.756719 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-configmap-kubelet-serving-ca-bundle\") pod \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " Mar 18 10:17:43.757156 master-0 kubenswrapper[30420]: I0318 10:17:43.756764 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqx6m\" (UniqueName: 
\"kubernetes.io/projected/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-kube-api-access-fqx6m\") pod \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " Mar 18 10:17:43.757156 master-0 kubenswrapper[30420]: I0318 10:17:43.756804 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-audit-log\") pod \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " Mar 18 10:17:43.758741 master-0 kubenswrapper[30420]: I0318 10:17:43.757365 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-audit-log" (OuterVolumeSpecName: "audit-log") pod "106fc2a2-9e7b-4f86-94b8-b1a1906646d8" (UID: "106fc2a2-9e7b-4f86-94b8-b1a1906646d8"). InnerVolumeSpecName "audit-log". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 10:17:43.758741 master-0 kubenswrapper[30420]: I0318 10:17:43.757433 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-client-ca-bundle\") pod \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " Mar 18 10:17:43.758741 master-0 kubenswrapper[30420]: I0318 10:17:43.757615 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-secret-metrics-server-tls\") pod \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\" (UID: \"106fc2a2-9e7b-4f86-94b8-b1a1906646d8\") " Mar 18 10:17:43.758741 master-0 kubenswrapper[30420]: I0318 10:17:43.757761 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-configmap-kubelet-serving-ca-bundle" 
(OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "106fc2a2-9e7b-4f86-94b8-b1a1906646d8" (UID: "106fc2a2-9e7b-4f86-94b8-b1a1906646d8"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:17:43.758741 master-0 kubenswrapper[30420]: I0318 10:17:43.757853 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-metrics-server-audit-profiles" (OuterVolumeSpecName: "metrics-server-audit-profiles") pod "106fc2a2-9e7b-4f86-94b8-b1a1906646d8" (UID: "106fc2a2-9e7b-4f86-94b8-b1a1906646d8"). InnerVolumeSpecName "metrics-server-audit-profiles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:17:43.758741 master-0 kubenswrapper[30420]: I0318 10:17:43.758572 30420 reconciler_common.go:293] "Volume detached for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-metrics-server-audit-profiles\") on node \"master-0\" DevicePath \"\"" Mar 18 10:17:43.758741 master-0 kubenswrapper[30420]: I0318 10:17:43.758630 30420 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 10:17:43.758741 master-0 kubenswrapper[30420]: I0318 10:17:43.758658 30420 reconciler_common.go:293] "Volume detached for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-audit-log\") on node \"master-0\" DevicePath \"\"" Mar 18 10:17:43.762590 master-0 kubenswrapper[30420]: I0318 10:17:43.762514 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-client-ca-bundle" (OuterVolumeSpecName: "client-ca-bundle") pod 
"106fc2a2-9e7b-4f86-94b8-b1a1906646d8" (UID: "106fc2a2-9e7b-4f86-94b8-b1a1906646d8"). InnerVolumeSpecName "client-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 10:17:43.762724 master-0 kubenswrapper[30420]: I0318 10:17:43.762635 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-secret-metrics-server-tls" (OuterVolumeSpecName: "secret-metrics-server-tls") pod "106fc2a2-9e7b-4f86-94b8-b1a1906646d8" (UID: "106fc2a2-9e7b-4f86-94b8-b1a1906646d8"). InnerVolumeSpecName "secret-metrics-server-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 10:17:43.764149 master-0 kubenswrapper[30420]: I0318 10:17:43.764083 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-kube-api-access-fqx6m" (OuterVolumeSpecName: "kube-api-access-fqx6m") pod "106fc2a2-9e7b-4f86-94b8-b1a1906646d8" (UID: "106fc2a2-9e7b-4f86-94b8-b1a1906646d8"). InnerVolumeSpecName "kube-api-access-fqx6m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 10:17:43.767788 master-0 kubenswrapper[30420]: I0318 10:17:43.767731 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "106fc2a2-9e7b-4f86-94b8-b1a1906646d8" (UID: "106fc2a2-9e7b-4f86-94b8-b1a1906646d8"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 10:17:43.861035 master-0 kubenswrapper[30420]: I0318 10:17:43.860937 30420 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-secret-metrics-server-tls\") on node \"master-0\" DevicePath \"\""
Mar 18 10:17:43.861035 master-0 kubenswrapper[30420]: I0318 10:17:43.861004 30420 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\""
Mar 18 10:17:43.861035 master-0 kubenswrapper[30420]: I0318 10:17:43.861027 30420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqx6m\" (UniqueName: \"kubernetes.io/projected/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-kube-api-access-fqx6m\") on node \"master-0\" DevicePath \"\""
Mar 18 10:17:43.861035 master-0 kubenswrapper[30420]: I0318 10:17:43.861048 30420 reconciler_common.go:293] "Volume detached for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/106fc2a2-9e7b-4f86-94b8-b1a1906646d8-client-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 10:17:44.016050 master-0 kubenswrapper[30420]: I0318 10:17:44.015885 30420 generic.go:334] "Generic (PLEG): container finished" podID="106fc2a2-9e7b-4f86-94b8-b1a1906646d8" containerID="aae90b8b3fa095e8ace85536181927d10e6ffa9bfdc0f83781e48ac6ccdad6c0" exitCode=0
Mar 18 10:17:44.016050 master-0 kubenswrapper[30420]: I0318 10:17:44.015940 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" event={"ID":"106fc2a2-9e7b-4f86-94b8-b1a1906646d8","Type":"ContainerDied","Data":"aae90b8b3fa095e8ace85536181927d10e6ffa9bfdc0f83781e48ac6ccdad6c0"}
Mar 18 10:17:44.016050 master-0 kubenswrapper[30420]: I0318 10:17:44.015970 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-74c475bc87-xx98m"
Mar 18 10:17:44.016050 master-0 kubenswrapper[30420]: I0318 10:17:44.016001 30420 scope.go:117] "RemoveContainer" containerID="aae90b8b3fa095e8ace85536181927d10e6ffa9bfdc0f83781e48ac6ccdad6c0"
Mar 18 10:17:44.016517 master-0 kubenswrapper[30420]: I0318 10:17:44.015989 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-74c475bc87-xx98m" event={"ID":"106fc2a2-9e7b-4f86-94b8-b1a1906646d8","Type":"ContainerDied","Data":"adda5560398a1e9cd1248ce8d3ae8608ee224ce0ee349c65f7682b313879aa78"}
Mar 18 10:17:44.056402 master-0 kubenswrapper[30420]: I0318 10:17:44.056328 30420 scope.go:117] "RemoveContainer" containerID="aae90b8b3fa095e8ace85536181927d10e6ffa9bfdc0f83781e48ac6ccdad6c0"
Mar 18 10:17:44.056938 master-0 kubenswrapper[30420]: E0318 10:17:44.056872 30420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aae90b8b3fa095e8ace85536181927d10e6ffa9bfdc0f83781e48ac6ccdad6c0\": container with ID starting with aae90b8b3fa095e8ace85536181927d10e6ffa9bfdc0f83781e48ac6ccdad6c0 not found: ID does not exist" containerID="aae90b8b3fa095e8ace85536181927d10e6ffa9bfdc0f83781e48ac6ccdad6c0"
Mar 18 10:17:44.057101 master-0 kubenswrapper[30420]: I0318 10:17:44.056932 30420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aae90b8b3fa095e8ace85536181927d10e6ffa9bfdc0f83781e48ac6ccdad6c0"} err="failed to get container status \"aae90b8b3fa095e8ace85536181927d10e6ffa9bfdc0f83781e48ac6ccdad6c0\": rpc error: code = NotFound desc = could not find container \"aae90b8b3fa095e8ace85536181927d10e6ffa9bfdc0f83781e48ac6ccdad6c0\": container with ID starting with aae90b8b3fa095e8ace85536181927d10e6ffa9bfdc0f83781e48ac6ccdad6c0 not found: ID does not exist"
Mar 18 10:17:44.078528 master-0 kubenswrapper[30420]: I0318 10:17:44.078443 30420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-74c475bc87-xx98m"]
Mar 18 10:17:44.085936 master-0 kubenswrapper[30420]: I0318 10:17:44.085773 30420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/metrics-server-74c475bc87-xx98m"]
Mar 18 10:17:44.176601 master-0 kubenswrapper[30420]: I0318 10:17:44.176543 30420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="106fc2a2-9e7b-4f86-94b8-b1a1906646d8" path="/var/lib/kubelet/pods/106fc2a2-9e7b-4f86-94b8-b1a1906646d8/volumes"
Mar 18 10:17:44.580479 master-0 kubenswrapper[30420]: E0318 10:17:44.580402 30420 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ae2feb1_eff5_4ab6_a6e8_df10ff8c6985.slice/crio-3da872bb6c93c87a71567fbc7cb2b38afb3d6b3107b781693000b9bb6d2b8971\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ae2feb1_eff5_4ab6_a6e8_df10ff8c6985.slice\": RecentStats: unable to find data in memory cache]"
Mar 18 10:17:52.234970 master-0 kubenswrapper[30420]: I0318 10:17:52.234885 30420 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Mar 18 10:17:52.234970 master-0 kubenswrapper[30420]: I0318 10:17:52.234962 30420 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="06de8c68c0832ab8f7d68e9aec6f9555" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Mar 18 10:17:52.239669 master-0 kubenswrapper[30420]: I0318 10:17:52.239612 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 10:17:53.352312 master-0 kubenswrapper[30420]: E0318 10:17:53.352244 30420 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ae2feb1_eff5_4ab6_a6e8_df10ff8c6985.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ae2feb1_eff5_4ab6_a6e8_df10ff8c6985.slice/crio-3da872bb6c93c87a71567fbc7cb2b38afb3d6b3107b781693000b9bb6d2b8971\": RecentStats: unable to find data in memory cache]"
Mar 18 10:17:59.720777 master-0 kubenswrapper[30420]: E0318 10:17:59.720708 30420 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ae2feb1_eff5_4ab6_a6e8_df10ff8c6985.slice/crio-3da872bb6c93c87a71567fbc7cb2b38afb3d6b3107b781693000b9bb6d2b8971\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ae2feb1_eff5_4ab6_a6e8_df10ff8c6985.slice\": RecentStats: unable to find data in memory cache]"
Mar 18 10:18:02.234773 master-0 kubenswrapper[30420]: I0318 10:18:02.234529 30420 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Mar 18 10:18:02.235574 master-0 kubenswrapper[30420]: I0318 10:18:02.235467 30420 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="06de8c68c0832ab8f7d68e9aec6f9555" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Mar 18 10:18:02.235574 master-0 kubenswrapper[30420]: I0318 10:18:02.235551 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 10:18:02.239117 master-0 kubenswrapper[30420]: I0318 10:18:02.239021 30420 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"80a9362bed3132ed0f82b30c7ecfd3d8cdc4778cfbb3d8abb7c0c64bd8648dfc"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Mar 18 10:18:02.240079 master-0 kubenswrapper[30420]: I0318 10:18:02.239387 30420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="06de8c68c0832ab8f7d68e9aec6f9555" containerName="kube-controller-manager" containerID="cri-o://80a9362bed3132ed0f82b30c7ecfd3d8cdc4778cfbb3d8abb7c0c64bd8648dfc" gracePeriod=30
Mar 18 10:18:03.389106 master-0 kubenswrapper[30420]: E0318 10:18:03.389020 30420 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ae2feb1_eff5_4ab6_a6e8_df10ff8c6985.slice/crio-3da872bb6c93c87a71567fbc7cb2b38afb3d6b3107b781693000b9bb6d2b8971\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ae2feb1_eff5_4ab6_a6e8_df10ff8c6985.slice\": RecentStats: unable to find data in memory cache]"
Mar 18 10:18:13.618360 master-0 kubenswrapper[30420]: E0318 10:18:13.618285 30420 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ae2feb1_eff5_4ab6_a6e8_df10ff8c6985.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ae2feb1_eff5_4ab6_a6e8_df10ff8c6985.slice/crio-3da872bb6c93c87a71567fbc7cb2b38afb3d6b3107b781693000b9bb6d2b8971\": RecentStats: unable to find data in memory cache]"
Mar 18 10:18:14.581543 master-0 kubenswrapper[30420]: E0318 10:18:14.581415 30420 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ae2feb1_eff5_4ab6_a6e8_df10ff8c6985.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ae2feb1_eff5_4ab6_a6e8_df10ff8c6985.slice/crio-3da872bb6c93c87a71567fbc7cb2b38afb3d6b3107b781693000b9bb6d2b8971\": RecentStats: unable to find data in memory cache]"
Mar 18 10:18:23.839734 master-0 kubenswrapper[30420]: E0318 10:18:23.839587 30420 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ae2feb1_eff5_4ab6_a6e8_df10ff8c6985.slice/crio-3da872bb6c93c87a71567fbc7cb2b38afb3d6b3107b781693000b9bb6d2b8971\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ae2feb1_eff5_4ab6_a6e8_df10ff8c6985.slice\": RecentStats: unable to find data in memory cache]"
Mar 18 10:18:29.719519 master-0 kubenswrapper[30420]: E0318 10:18:29.719446 30420 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ae2feb1_eff5_4ab6_a6e8_df10ff8c6985.slice/crio-3da872bb6c93c87a71567fbc7cb2b38afb3d6b3107b781693000b9bb6d2b8971\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ae2feb1_eff5_4ab6_a6e8_df10ff8c6985.slice\": RecentStats: unable to find data in memory cache]"
Mar 18 10:18:32.481219 master-0 kubenswrapper[30420]: I0318 10:18:32.481058 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_06de8c68c0832ab8f7d68e9aec6f9555/kube-controller-manager/0.log"
Mar 18 10:18:32.481219 master-0 kubenswrapper[30420]: I0318 10:18:32.481153 30420 generic.go:334] "Generic (PLEG): container finished" podID="06de8c68c0832ab8f7d68e9aec6f9555" containerID="80a9362bed3132ed0f82b30c7ecfd3d8cdc4778cfbb3d8abb7c0c64bd8648dfc" exitCode=137
Mar 18 10:18:32.481219 master-0 kubenswrapper[30420]: I0318 10:18:32.481196 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"06de8c68c0832ab8f7d68e9aec6f9555","Type":"ContainerDied","Data":"80a9362bed3132ed0f82b30c7ecfd3d8cdc4778cfbb3d8abb7c0c64bd8648dfc"}
Mar 18 10:18:33.494580 master-0 kubenswrapper[30420]: I0318 10:18:33.494532 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_06de8c68c0832ab8f7d68e9aec6f9555/kube-controller-manager/0.log"
Mar 18 10:18:33.495116 master-0 kubenswrapper[30420]: I0318 10:18:33.494598 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"06de8c68c0832ab8f7d68e9aec6f9555","Type":"ContainerStarted","Data":"3e16adfa3b6710be539e0232f53aa6c36bbaa1316d705a146e83654363489b6d"}
Mar 18 10:18:33.891431 master-0 kubenswrapper[30420]: E0318 10:18:33.891349 30420 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ae2feb1_eff5_4ab6_a6e8_df10ff8c6985.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ae2feb1_eff5_4ab6_a6e8_df10ff8c6985.slice/crio-3da872bb6c93c87a71567fbc7cb2b38afb3d6b3107b781693000b9bb6d2b8971\": RecentStats: unable to find data in memory cache]"
Mar 18 10:18:42.234403 master-0 kubenswrapper[30420]: I0318 10:18:42.234238 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 10:18:42.234403 master-0 kubenswrapper[30420]: I0318 10:18:42.234299 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 10:18:42.237845 master-0 kubenswrapper[30420]: I0318 10:18:42.237784 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 10:18:42.589390 master-0 kubenswrapper[30420]: I0318 10:18:42.589328 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 10:18:51.576883 master-0 kubenswrapper[30420]: I0318 10:18:51.576812 30420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Mar 18 10:18:51.577583 master-0 kubenswrapper[30420]: I0318 10:18:51.577248 30420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="82595633-1fc3-4dc7-a5bc-ce391c4d743d" containerName="prometheus" containerID="cri-o://a7d7f851f4c1584aa500215adc79e22cec4d88779ff6943dd801eb2dcf6d6097" gracePeriod=600
Mar 18 10:18:51.577955 master-0 kubenswrapper[30420]: I0318 10:18:51.577684 30420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="82595633-1fc3-4dc7-a5bc-ce391c4d743d" containerName="thanos-sidecar" containerID="cri-o://7a02ae2649c338a61edde895fd21b3f44e6f25ebc4803ff3f064ad18b3962b9c" gracePeriod=600
Mar 18 10:18:51.577955 master-0 kubenswrapper[30420]: I0318 10:18:51.577790 30420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="82595633-1fc3-4dc7-a5bc-ce391c4d743d" containerName="kube-rbac-proxy" containerID="cri-o://677c8e0fe1cb41f8869ff6affa1a09ada04455dd6fb0bafbd39b72e228a5bed9" gracePeriod=600
Mar 18 10:18:51.577955 master-0 kubenswrapper[30420]: I0318 10:18:51.577747 30420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="82595633-1fc3-4dc7-a5bc-ce391c4d743d" containerName="kube-rbac-proxy-web" containerID="cri-o://99ea07cc70b5b202dcc0d5bb6ffbd3c680b98550ccd0e11007931357c0554eb1" gracePeriod=600
Mar 18 10:18:51.577955 master-0 kubenswrapper[30420]: I0318 10:18:51.577944 30420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="82595633-1fc3-4dc7-a5bc-ce391c4d743d" containerName="kube-rbac-proxy-thanos" containerID="cri-o://e16addfd28e3f5280697035643ff6b4e9e9620e0c0365e8d1b364e4a59da7ee7" gracePeriod=600
Mar 18 10:18:51.578199 master-0 kubenswrapper[30420]: I0318 10:18:51.577866 30420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="82595633-1fc3-4dc7-a5bc-ce391c4d743d" containerName="config-reloader" containerID="cri-o://dcf2b1ec05bab2e946c1cab6fd5813ae02216ee988779d859d784f1aefec0d8d" gracePeriod=600
Mar 18 10:18:51.587957 master-0 kubenswrapper[30420]: I0318 10:18:51.586691 30420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Mar 18 10:18:51.588851 master-0 kubenswrapper[30420]: I0318 10:18:51.588702 30420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" containerName="kube-rbac-proxy" containerID="cri-o://e4236a5bb78301349ee653952bd3cb395f1b39a85f8de46e23e28e77a666e3c7" gracePeriod=120
Mar 18 10:18:51.589213 master-0 kubenswrapper[30420]: I0318 10:18:51.588717 30420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" containerName="kube-rbac-proxy-web" containerID="cri-o://b4681dda832d085e385da86341cb24481abad420db2010ec43eb55f255a7bff3" gracePeriod=120
Mar 18 10:18:51.589213 master-0 kubenswrapper[30420]: I0318 10:18:51.589092 30420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" containerName="prom-label-proxy" containerID="cri-o://e9eba945cae2ffe1611d676653f381605ef3bc3f8ae1008a52eab79b2e860df4" gracePeriod=120
Mar 18 10:18:51.589659 master-0 kubenswrapper[30420]: I0318 10:18:51.589194 30420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" containerName="kube-rbac-proxy-metric" containerID="cri-o://3e10f8a078a0a63498335680d1ef4600429c447d0025a7efed9dc9c399363a43" gracePeriod=120
Mar 18 10:18:51.589659 master-0 kubenswrapper[30420]: I0318 10:18:51.589311 30420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" containerName="config-reloader" containerID="cri-o://b3e3abcb3eed9e3a76ccda69fb88c81863a9b6023fc0895bac0a49ac23f0964d" gracePeriod=120
Mar 18 10:18:51.589915 master-0 kubenswrapper[30420]: I0318 10:18:51.588658 30420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" containerName="alertmanager" containerID="cri-o://6f0a607d3d4ed38bb00f164af6e11bcd0b44d7197e12694411a958ee8be276f5" gracePeriod=120
Mar 18 10:18:52.476002 master-0 kubenswrapper[30420]: I0318 10:18:52.475934 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6659f98f4-ccs7g"]
Mar 18 10:18:52.476786 master-0 kubenswrapper[30420]: E0318 10:18:52.476233 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="106fc2a2-9e7b-4f86-94b8-b1a1906646d8" containerName="metrics-server"
Mar 18 10:18:52.476786 master-0 kubenswrapper[30420]: I0318 10:18:52.476247 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="106fc2a2-9e7b-4f86-94b8-b1a1906646d8" containerName="metrics-server"
Mar 18 10:18:52.476786 master-0 kubenswrapper[30420]: E0318 10:18:52.476281 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985" containerName="console"
Mar 18 10:18:52.476786 master-0 kubenswrapper[30420]: I0318 10:18:52.476287 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985" containerName="console"
Mar 18 10:18:52.476786 master-0 kubenswrapper[30420]: E0318 10:18:52.476301 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2be13c7e-ab8c-43a4-ad8e-4ef8fd233348" containerName="installer"
Mar 18 10:18:52.476786 master-0 kubenswrapper[30420]: I0318 10:18:52.476309 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="2be13c7e-ab8c-43a4-ad8e-4ef8fd233348" containerName="installer"
Mar 18 10:18:52.476786 master-0 kubenswrapper[30420]: I0318 10:18:52.476447 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ae2feb1-eff5-4ab6-a6e8-df10ff8c6985" containerName="console"
Mar 18 10:18:52.476786 master-0 kubenswrapper[30420]: I0318 10:18:52.476458 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="2be13c7e-ab8c-43a4-ad8e-4ef8fd233348" containerName="installer"
Mar 18 10:18:52.476786 master-0 kubenswrapper[30420]: I0318 10:18:52.476469 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="106fc2a2-9e7b-4f86-94b8-b1a1906646d8" containerName="metrics-server"
Mar 18 10:18:52.477243 master-0 kubenswrapper[30420]: I0318 10:18:52.476974 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6659f98f4-ccs7g"
Mar 18 10:18:52.484872 master-0 kubenswrapper[30420]: I0318 10:18:52.484809 30420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-57d6b5b44-hc2hr"]
Mar 18 10:18:52.510298 master-0 kubenswrapper[30420]: I0318 10:18:52.509085 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6659f98f4-ccs7g"]
Mar 18 10:18:52.577919 master-0 kubenswrapper[30420]: I0318 10:18:52.573158 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0a3e75ac-917b-4aff-a146-89f408145ec5-service-ca\") pod \"console-6659f98f4-ccs7g\" (UID: \"0a3e75ac-917b-4aff-a146-89f408145ec5\") " pod="openshift-console/console-6659f98f4-ccs7g"
Mar 18 10:18:52.577919 master-0 kubenswrapper[30420]: I0318 10:18:52.573317 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0a3e75ac-917b-4aff-a146-89f408145ec5-console-config\") pod \"console-6659f98f4-ccs7g\" (UID: \"0a3e75ac-917b-4aff-a146-89f408145ec5\") " pod="openshift-console/console-6659f98f4-ccs7g"
Mar 18 10:18:52.577919 master-0 kubenswrapper[30420]: I0318 10:18:52.573345 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0a3e75ac-917b-4aff-a146-89f408145ec5-oauth-serving-cert\") pod \"console-6659f98f4-ccs7g\" (UID: \"0a3e75ac-917b-4aff-a146-89f408145ec5\") " pod="openshift-console/console-6659f98f4-ccs7g"
Mar 18 10:18:52.577919 master-0 kubenswrapper[30420]: I0318 10:18:52.573368 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0a3e75ac-917b-4aff-a146-89f408145ec5-console-oauth-config\") pod \"console-6659f98f4-ccs7g\" (UID: \"0a3e75ac-917b-4aff-a146-89f408145ec5\") " pod="openshift-console/console-6659f98f4-ccs7g"
Mar 18 10:18:52.577919 master-0 kubenswrapper[30420]: I0318 10:18:52.573404 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a3e75ac-917b-4aff-a146-89f408145ec5-trusted-ca-bundle\") pod \"console-6659f98f4-ccs7g\" (UID: \"0a3e75ac-917b-4aff-a146-89f408145ec5\") " pod="openshift-console/console-6659f98f4-ccs7g"
Mar 18 10:18:52.577919 master-0 kubenswrapper[30420]: I0318 10:18:52.573438 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28tgw\" (UniqueName: \"kubernetes.io/projected/0a3e75ac-917b-4aff-a146-89f408145ec5-kube-api-access-28tgw\") pod \"console-6659f98f4-ccs7g\" (UID: \"0a3e75ac-917b-4aff-a146-89f408145ec5\") " pod="openshift-console/console-6659f98f4-ccs7g"
Mar 18 10:18:52.577919 master-0 kubenswrapper[30420]: I0318 10:18:52.573481 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a3e75ac-917b-4aff-a146-89f408145ec5-console-serving-cert\") pod \"console-6659f98f4-ccs7g\" (UID: \"0a3e75ac-917b-4aff-a146-89f408145ec5\") " pod="openshift-console/console-6659f98f4-ccs7g"
Mar 18 10:18:52.679595 master-0 kubenswrapper[30420]: I0318 10:18:52.679507 30420 generic.go:334] "Generic (PLEG): container finished" podID="9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" containerID="e9eba945cae2ffe1611d676653f381605ef3bc3f8ae1008a52eab79b2e860df4" exitCode=0
Mar 18 10:18:52.679595 master-0 kubenswrapper[30420]: I0318 10:18:52.679543 30420 generic.go:334] "Generic (PLEG): container finished" podID="9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" containerID="3e10f8a078a0a63498335680d1ef4600429c447d0025a7efed9dc9c399363a43" exitCode=0
Mar 18 10:18:52.679595 master-0 kubenswrapper[30420]: I0318 10:18:52.679552 30420 generic.go:334] "Generic (PLEG): container finished" podID="9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" containerID="e4236a5bb78301349ee653952bd3cb395f1b39a85f8de46e23e28e77a666e3c7" exitCode=0
Mar 18 10:18:52.679595 master-0 kubenswrapper[30420]: I0318 10:18:52.679558 30420 generic.go:334] "Generic (PLEG): container finished" podID="9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" containerID="b4681dda832d085e385da86341cb24481abad420db2010ec43eb55f255a7bff3" exitCode=0
Mar 18 10:18:52.679595 master-0 kubenswrapper[30420]: I0318 10:18:52.679565 30420 generic.go:334] "Generic (PLEG): container finished" podID="9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" containerID="b3e3abcb3eed9e3a76ccda69fb88c81863a9b6023fc0895bac0a49ac23f0964d" exitCode=0
Mar 18 10:18:52.679595 master-0 kubenswrapper[30420]: I0318 10:18:52.679572 30420 generic.go:334] "Generic (PLEG): container finished" podID="9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" containerID="6f0a607d3d4ed38bb00f164af6e11bcd0b44d7197e12694411a958ee8be276f5" exitCode=0
Mar 18 10:18:52.679595 master-0 kubenswrapper[30420]: I0318 10:18:52.679615 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761","Type":"ContainerDied","Data":"e9eba945cae2ffe1611d676653f381605ef3bc3f8ae1008a52eab79b2e860df4"}
Mar 18 10:18:52.680046 master-0 kubenswrapper[30420]: I0318 10:18:52.679642 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761","Type":"ContainerDied","Data":"3e10f8a078a0a63498335680d1ef4600429c447d0025a7efed9dc9c399363a43"}
Mar 18 10:18:52.680046 master-0 kubenswrapper[30420]: I0318 10:18:52.679653 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761","Type":"ContainerDied","Data":"e4236a5bb78301349ee653952bd3cb395f1b39a85f8de46e23e28e77a666e3c7"}
Mar 18 10:18:52.680046 master-0 kubenswrapper[30420]: I0318 10:18:52.679664 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761","Type":"ContainerDied","Data":"b4681dda832d085e385da86341cb24481abad420db2010ec43eb55f255a7bff3"}
Mar 18 10:18:52.680046 master-0 kubenswrapper[30420]: I0318 10:18:52.679674 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761","Type":"ContainerDied","Data":"b3e3abcb3eed9e3a76ccda69fb88c81863a9b6023fc0895bac0a49ac23f0964d"}
Mar 18 10:18:52.680046 master-0 kubenswrapper[30420]: I0318 10:18:52.679684 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761","Type":"ContainerDied","Data":"6f0a607d3d4ed38bb00f164af6e11bcd0b44d7197e12694411a958ee8be276f5"}
Mar 18 10:18:52.681197 master-0 kubenswrapper[30420]: I0318 10:18:52.680963 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0a3e75ac-917b-4aff-a146-89f408145ec5-console-config\") pod \"console-6659f98f4-ccs7g\" (UID: \"0a3e75ac-917b-4aff-a146-89f408145ec5\") " pod="openshift-console/console-6659f98f4-ccs7g"
Mar 18 10:18:52.681197 master-0 kubenswrapper[30420]: I0318 10:18:52.681012 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0a3e75ac-917b-4aff-a146-89f408145ec5-oauth-serving-cert\") pod \"console-6659f98f4-ccs7g\" (UID: \"0a3e75ac-917b-4aff-a146-89f408145ec5\") " pod="openshift-console/console-6659f98f4-ccs7g"
Mar 18 10:18:52.681197 master-0 kubenswrapper[30420]: I0318 10:18:52.681039 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0a3e75ac-917b-4aff-a146-89f408145ec5-console-oauth-config\") pod \"console-6659f98f4-ccs7g\" (UID: \"0a3e75ac-917b-4aff-a146-89f408145ec5\") " pod="openshift-console/console-6659f98f4-ccs7g"
Mar 18 10:18:52.681197 master-0 kubenswrapper[30420]: I0318 10:18:52.681063 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a3e75ac-917b-4aff-a146-89f408145ec5-trusted-ca-bundle\") pod \"console-6659f98f4-ccs7g\" (UID: \"0a3e75ac-917b-4aff-a146-89f408145ec5\") " pod="openshift-console/console-6659f98f4-ccs7g"
Mar 18 10:18:52.681197 master-0 kubenswrapper[30420]: I0318 10:18:52.681097 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28tgw\" (UniqueName: \"kubernetes.io/projected/0a3e75ac-917b-4aff-a146-89f408145ec5-kube-api-access-28tgw\") pod \"console-6659f98f4-ccs7g\" (UID: \"0a3e75ac-917b-4aff-a146-89f408145ec5\") " pod="openshift-console/console-6659f98f4-ccs7g"
Mar 18 10:18:52.681197 master-0 kubenswrapper[30420]: I0318 10:18:52.681131 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a3e75ac-917b-4aff-a146-89f408145ec5-console-serving-cert\") pod \"console-6659f98f4-ccs7g\" (UID: \"0a3e75ac-917b-4aff-a146-89f408145ec5\") " pod="openshift-console/console-6659f98f4-ccs7g"
Mar 18 10:18:52.681497 master-0 kubenswrapper[30420]: I0318 10:18:52.681202 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0a3e75ac-917b-4aff-a146-89f408145ec5-service-ca\") pod \"console-6659f98f4-ccs7g\" (UID: \"0a3e75ac-917b-4aff-a146-89f408145ec5\") " pod="openshift-console/console-6659f98f4-ccs7g"
Mar 18 10:18:52.691087 master-0 kubenswrapper[30420]: I0318 10:18:52.682227 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0a3e75ac-917b-4aff-a146-89f408145ec5-service-ca\") pod \"console-6659f98f4-ccs7g\" (UID: \"0a3e75ac-917b-4aff-a146-89f408145ec5\") " pod="openshift-console/console-6659f98f4-ccs7g"
Mar 18 10:18:52.691087 master-0 kubenswrapper[30420]: I0318 10:18:52.683954 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0a3e75ac-917b-4aff-a146-89f408145ec5-console-config\") pod \"console-6659f98f4-ccs7g\" (UID: \"0a3e75ac-917b-4aff-a146-89f408145ec5\") " pod="openshift-console/console-6659f98f4-ccs7g"
Mar 18 10:18:52.691087 master-0 kubenswrapper[30420]: I0318 10:18:52.685889 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a3e75ac-917b-4aff-a146-89f408145ec5-trusted-ca-bundle\") pod \"console-6659f98f4-ccs7g\" (UID: \"0a3e75ac-917b-4aff-a146-89f408145ec5\") " pod="openshift-console/console-6659f98f4-ccs7g"
Mar 18 10:18:52.691087 master-0 kubenswrapper[30420]: I0318 10:18:52.686225 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0a3e75ac-917b-4aff-a146-89f408145ec5-oauth-serving-cert\") pod \"console-6659f98f4-ccs7g\" (UID: \"0a3e75ac-917b-4aff-a146-89f408145ec5\") " pod="openshift-console/console-6659f98f4-ccs7g"
Mar 18 10:18:52.691087 master-0 kubenswrapper[30420]: I0318 10:18:52.686911 30420 generic.go:334] "Generic (PLEG): container finished" podID="82595633-1fc3-4dc7-a5bc-ce391c4d743d" containerID="e16addfd28e3f5280697035643ff6b4e9e9620e0c0365e8d1b364e4a59da7ee7" exitCode=0
Mar 18 10:18:52.691087 master-0 kubenswrapper[30420]: I0318 10:18:52.686932 30420 generic.go:334] "Generic (PLEG): container finished" podID="82595633-1fc3-4dc7-a5bc-ce391c4d743d" containerID="677c8e0fe1cb41f8869ff6affa1a09ada04455dd6fb0bafbd39b72e228a5bed9" exitCode=0
Mar 18 10:18:52.691087 master-0 kubenswrapper[30420]: I0318 10:18:52.686939 30420 generic.go:334] "Generic (PLEG): container finished" podID="82595633-1fc3-4dc7-a5bc-ce391c4d743d" containerID="99ea07cc70b5b202dcc0d5bb6ffbd3c680b98550ccd0e11007931357c0554eb1" exitCode=0
Mar 18 10:18:52.691087 master-0 kubenswrapper[30420]: I0318 10:18:52.686947 30420 generic.go:334] "Generic (PLEG): container finished" podID="82595633-1fc3-4dc7-a5bc-ce391c4d743d" containerID="7a02ae2649c338a61edde895fd21b3f44e6f25ebc4803ff3f064ad18b3962b9c" exitCode=0
Mar 18 10:18:52.691087 master-0 kubenswrapper[30420]: I0318 10:18:52.686954 30420 generic.go:334] "Generic (PLEG): container finished" podID="82595633-1fc3-4dc7-a5bc-ce391c4d743d" containerID="dcf2b1ec05bab2e946c1cab6fd5813ae02216ee988779d859d784f1aefec0d8d" exitCode=0
Mar 18 10:18:52.691087 master-0 kubenswrapper[30420]: I0318 10:18:52.686962 30420 generic.go:334] "Generic (PLEG): container finished" podID="82595633-1fc3-4dc7-a5bc-ce391c4d743d" containerID="a7d7f851f4c1584aa500215adc79e22cec4d88779ff6943dd801eb2dcf6d6097" exitCode=0
Mar 18 10:18:52.691087 master-0 kubenswrapper[30420]: I0318 10:18:52.686981 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"82595633-1fc3-4dc7-a5bc-ce391c4d743d","Type":"ContainerDied","Data":"e16addfd28e3f5280697035643ff6b4e9e9620e0c0365e8d1b364e4a59da7ee7"}
Mar 18 10:18:52.691087 master-0 kubenswrapper[30420]: I0318 10:18:52.687008 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"82595633-1fc3-4dc7-a5bc-ce391c4d743d","Type":"ContainerDied","Data":"677c8e0fe1cb41f8869ff6affa1a09ada04455dd6fb0bafbd39b72e228a5bed9"}
Mar 18 10:18:52.691087 master-0 kubenswrapper[30420]: I0318 10:18:52.687022 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"82595633-1fc3-4dc7-a5bc-ce391c4d743d","Type":"ContainerDied","Data":"99ea07cc70b5b202dcc0d5bb6ffbd3c680b98550ccd0e11007931357c0554eb1"}
Mar 18 10:18:52.691087 master-0 kubenswrapper[30420]: I0318 10:18:52.687036 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"82595633-1fc3-4dc7-a5bc-ce391c4d743d","Type":"ContainerDied","Data":"7a02ae2649c338a61edde895fd21b3f44e6f25ebc4803ff3f064ad18b3962b9c"}
Mar 18 10:18:52.691087 master-0 kubenswrapper[30420]: I0318 10:18:52.687048 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"82595633-1fc3-4dc7-a5bc-ce391c4d743d","Type":"ContainerDied","Data":"dcf2b1ec05bab2e946c1cab6fd5813ae02216ee988779d859d784f1aefec0d8d"}
Mar 18 10:18:52.691087 master-0 kubenswrapper[30420]: I0318 10:18:52.687058 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"82595633-1fc3-4dc7-a5bc-ce391c4d743d","Type":"ContainerDied","Data":"a7d7f851f4c1584aa500215adc79e22cec4d88779ff6943dd801eb2dcf6d6097"}
Mar 18 10:18:52.695003 master-0 kubenswrapper[30420]: I0318 10:18:52.694251 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0a3e75ac-917b-4aff-a146-89f408145ec5-console-oauth-config\") pod \"console-6659f98f4-ccs7g\" (UID: \"0a3e75ac-917b-4aff-a146-89f408145ec5\") " pod="openshift-console/console-6659f98f4-ccs7g"
Mar 18 10:18:52.700268 master-0 kubenswrapper[30420]: I0318 10:18:52.698613 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a3e75ac-917b-4aff-a146-89f408145ec5-console-serving-cert\") pod \"console-6659f98f4-ccs7g\" (UID: \"0a3e75ac-917b-4aff-a146-89f408145ec5\") " pod="openshift-console/console-6659f98f4-ccs7g"
Mar 18 10:18:52.714166 master-0 kubenswrapper[30420]: I0318 10:18:52.714126 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28tgw\" (UniqueName: \"kubernetes.io/projected/0a3e75ac-917b-4aff-a146-89f408145ec5-kube-api-access-28tgw\") pod \"console-6659f98f4-ccs7g\" (UID: \"0a3e75ac-917b-4aff-a146-89f408145ec5\") " pod="openshift-console/console-6659f98f4-ccs7g"
Mar 18 10:18:52.852537 master-0 kubenswrapper[30420]: I0318 10:18:52.852459 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6659f98f4-ccs7g"
Mar 18 10:18:52.938895 master-0 kubenswrapper[30420]: I0318 10:18:52.938772 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Mar 18 10:18:52.945763 master-0 kubenswrapper[30420]: I0318 10:18:52.945100 30420 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:52.994843 master-0 kubenswrapper[30420]: I0318 10:18:52.994762 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/82595633-1fc3-4dc7-a5bc-ce391c4d743d-config-out\") pod \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " Mar 18 10:18:52.995051 master-0 kubenswrapper[30420]: I0318 10:18:52.994901 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-kube-rbac-proxy-metric\") pod \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " Mar 18 10:18:52.995051 master-0 kubenswrapper[30420]: I0318 10:18:52.994987 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " Mar 18 10:18:52.995051 master-0 kubenswrapper[30420]: I0318 10:18:52.995029 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95sh2\" (UniqueName: \"kubernetes.io/projected/82595633-1fc3-4dc7-a5bc-ce391c4d743d-kube-api-access-95sh2\") pod \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " Mar 18 10:18:52.995164 master-0 kubenswrapper[30420]: I0318 10:18:52.995055 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/82595633-1fc3-4dc7-a5bc-ce391c4d743d-prometheus-k8s-db\") pod \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\" (UID: 
\"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " Mar 18 10:18:52.995164 master-0 kubenswrapper[30420]: I0318 10:18:52.995087 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/82595633-1fc3-4dc7-a5bc-ce391c4d743d-prometheus-k8s-rulefiles-0\") pod \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " Mar 18 10:18:52.995164 master-0 kubenswrapper[30420]: I0318 10:18:52.995112 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-grpc-tls\") pod \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " Mar 18 10:18:52.995164 master-0 kubenswrapper[30420]: I0318 10:18:52.995134 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-web-config\") pod \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " Mar 18 10:18:52.995337 master-0 kubenswrapper[30420]: I0318 10:18:52.995172 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-kube-rbac-proxy\") pod \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " Mar 18 10:18:52.995337 master-0 kubenswrapper[30420]: I0318 10:18:52.995240 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-config-out\") pod \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " Mar 18 10:18:52.995337 master-0 kubenswrapper[30420]: I0318 
10:18:52.995274 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-web-config\") pod \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " Mar 18 10:18:52.995337 master-0 kubenswrapper[30420]: I0318 10:18:52.995320 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-config-volume\") pod \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " Mar 18 10:18:52.995459 master-0 kubenswrapper[30420]: I0318 10:18:52.995348 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-metrics-client-certs\") pod \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " Mar 18 10:18:52.995459 master-0 kubenswrapper[30420]: I0318 10:18:52.995401 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-thanos-prometheus-http-client-file\") pod \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " Mar 18 10:18:52.995459 master-0 kubenswrapper[30420]: I0318 10:18:52.995431 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjjcq\" (UniqueName: \"kubernetes.io/projected/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-kube-api-access-vjjcq\") pod \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " Mar 18 10:18:52.995459 master-0 kubenswrapper[30420]: I0318 10:18:52.995456 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82595633-1fc3-4dc7-a5bc-ce391c4d743d-prometheus-trusted-ca-bundle\") pod \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " Mar 18 10:18:52.995572 master-0 kubenswrapper[30420]: I0318 10:18:52.995481 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-kube-rbac-proxy-web\") pod \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " Mar 18 10:18:52.995572 master-0 kubenswrapper[30420]: I0318 10:18:52.995504 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-alertmanager-main-db\") pod \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " Mar 18 10:18:52.995572 master-0 kubenswrapper[30420]: I0318 10:18:52.995539 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-kube-rbac-proxy\") pod \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " Mar 18 10:18:52.995572 master-0 kubenswrapper[30420]: I0318 10:18:52.995564 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-main-tls\") pod \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " Mar 18 10:18:52.995685 master-0 kubenswrapper[30420]: I0318 10:18:52.995589 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/82595633-1fc3-4dc7-a5bc-ce391c4d743d-tls-assets\") pod \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " Mar 18 10:18:52.995685 master-0 kubenswrapper[30420]: I0318 10:18:52.995616 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-metrics-client-ca\") pod \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " Mar 18 10:18:52.995685 master-0 kubenswrapper[30420]: I0318 10:18:52.995637 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-tls-assets\") pod \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " Mar 18 10:18:52.995774 master-0 kubenswrapper[30420]: I0318 10:18:52.995691 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/82595633-1fc3-4dc7-a5bc-ce391c4d743d-configmap-metrics-client-ca\") pod \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " Mar 18 10:18:52.995774 master-0 kubenswrapper[30420]: I0318 10:18:52.995715 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82595633-1fc3-4dc7-a5bc-ce391c4d743d-configmap-kubelet-serving-ca-bundle\") pod \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " Mar 18 10:18:52.995774 master-0 kubenswrapper[30420]: I0318 10:18:52.995745 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-alertmanager-trusted-ca-bundle\") pod \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\" (UID: \"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761\") " Mar 18 10:18:52.995774 master-0 kubenswrapper[30420]: I0318 10:18:52.995766 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-config\") pod \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " Mar 18 10:18:52.995916 master-0 kubenswrapper[30420]: I0318 10:18:52.995793 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " Mar 18 10:18:52.995916 master-0 kubenswrapper[30420]: I0318 10:18:52.995895 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82595633-1fc3-4dc7-a5bc-ce391c4d743d-configmap-serving-certs-ca-bundle\") pod \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " Mar 18 10:18:52.995977 master-0 kubenswrapper[30420]: I0318 10:18:52.995924 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-tls\") pod \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\" (UID: \"82595633-1fc3-4dc7-a5bc-ce391c4d743d\") " Mar 18 10:18:52.996911 master-0 kubenswrapper[30420]: I0318 10:18:52.996862 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/82595633-1fc3-4dc7-a5bc-ce391c4d743d-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "82595633-1fc3-4dc7-a5bc-ce391c4d743d" (UID: "82595633-1fc3-4dc7-a5bc-ce391c4d743d"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:18:52.997043 master-0 kubenswrapper[30420]: I0318 10:18:52.997013 30420 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82595633-1fc3-4dc7-a5bc-ce391c4d743d-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 10:18:52.997256 master-0 kubenswrapper[30420]: I0318 10:18:52.997225 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" (UID: "9adfdd99-ef2a-4698-8ef5-c2f97c4b6761"). InnerVolumeSpecName "metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:18:52.998116 master-0 kubenswrapper[30420]: I0318 10:18:52.998088 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-alertmanager-trusted-ca-bundle" (OuterVolumeSpecName: "alertmanager-trusted-ca-bundle") pod "9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" (UID: "9adfdd99-ef2a-4698-8ef5-c2f97c4b6761"). InnerVolumeSpecName "alertmanager-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:18:52.998345 master-0 kubenswrapper[30420]: I0318 10:18:52.998315 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82595633-1fc3-4dc7-a5bc-ce391c4d743d-prometheus-trusted-ca-bundle" (OuterVolumeSpecName: "prometheus-trusted-ca-bundle") pod "82595633-1fc3-4dc7-a5bc-ce391c4d743d" (UID: "82595633-1fc3-4dc7-a5bc-ce391c4d743d"). InnerVolumeSpecName "prometheus-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:18:53.000167 master-0 kubenswrapper[30420]: I0318 10:18:52.998680 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82595633-1fc3-4dc7-a5bc-ce391c4d743d-configmap-metrics-client-ca" (OuterVolumeSpecName: "configmap-metrics-client-ca") pod "82595633-1fc3-4dc7-a5bc-ce391c4d743d" (UID: "82595633-1fc3-4dc7-a5bc-ce391c4d743d"). InnerVolumeSpecName "configmap-metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:18:53.000167 master-0 kubenswrapper[30420]: I0318 10:18:52.998928 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-main-tls" (OuterVolumeSpecName: "secret-alertmanager-main-tls") pod "9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" (UID: "9adfdd99-ef2a-4698-8ef5-c2f97c4b6761"). InnerVolumeSpecName "secret-alertmanager-main-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 10:18:53.000167 master-0 kubenswrapper[30420]: I0318 10:18:52.998969 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-config-volume" (OuterVolumeSpecName: "config-volume") pod "9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" (UID: "9adfdd99-ef2a-4698-8ef5-c2f97c4b6761"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 10:18:53.000167 master-0 kubenswrapper[30420]: I0318 10:18:53.000106 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82595633-1fc3-4dc7-a5bc-ce391c4d743d-configmap-serving-certs-ca-bundle" (OuterVolumeSpecName: "configmap-serving-certs-ca-bundle") pod "82595633-1fc3-4dc7-a5bc-ce391c4d743d" (UID: "82595633-1fc3-4dc7-a5bc-ce391c4d743d"). InnerVolumeSpecName "configmap-serving-certs-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:18:53.000668 master-0 kubenswrapper[30420]: I0318 10:18:53.000558 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82595633-1fc3-4dc7-a5bc-ce391c4d743d-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "82595633-1fc3-4dc7-a5bc-ce391c4d743d" (UID: "82595633-1fc3-4dc7-a5bc-ce391c4d743d"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 10:18:53.001158 master-0 kubenswrapper[30420]: I0318 10:18:53.000909 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-alertmanager-main-db" (OuterVolumeSpecName: "alertmanager-main-db") pod "9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" (UID: "9adfdd99-ef2a-4698-8ef5-c2f97c4b6761"). InnerVolumeSpecName "alertmanager-main-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 10:18:53.001158 master-0 kubenswrapper[30420]: I0318 10:18:53.001015 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" (UID: "9adfdd99-ef2a-4698-8ef5-c2f97c4b6761"). InnerVolumeSpecName "tls-assets". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 10:18:53.001555 master-0 kubenswrapper[30420]: I0318 10:18:53.001524 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "82595633-1fc3-4dc7-a5bc-ce391c4d743d" (UID: "82595633-1fc3-4dc7-a5bc-ce391c4d743d"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 10:18:53.003172 master-0 kubenswrapper[30420]: I0318 10:18:53.002789 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-prometheus-k8s-kube-rbac-proxy-web") pod "82595633-1fc3-4dc7-a5bc-ce391c4d743d" (UID: "82595633-1fc3-4dc7-a5bc-ce391c4d743d"). InnerVolumeSpecName "secret-prometheus-k8s-kube-rbac-proxy-web". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 10:18:53.003172 master-0 kubenswrapper[30420]: I0318 10:18:53.002995 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-grpc-tls" (OuterVolumeSpecName: "secret-grpc-tls") pod "82595633-1fc3-4dc7-a5bc-ce391c4d743d" (UID: "82595633-1fc3-4dc7-a5bc-ce391c4d743d"). InnerVolumeSpecName "secret-grpc-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 10:18:53.003637 master-0 kubenswrapper[30420]: I0318 10:18:53.003595 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-kube-api-access-vjjcq" (OuterVolumeSpecName: "kube-api-access-vjjcq") pod "9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" (UID: "9adfdd99-ef2a-4698-8ef5-c2f97c4b6761"). InnerVolumeSpecName "kube-api-access-vjjcq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 10:18:53.005024 master-0 kubenswrapper[30420]: I0318 10:18:53.004781 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-kube-rbac-proxy" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy") pod "9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" (UID: "9adfdd99-ef2a-4698-8ef5-c2f97c4b6761"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 10:18:53.005024 master-0 kubenswrapper[30420]: I0318 10:18:53.004887 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "82595633-1fc3-4dc7-a5bc-ce391c4d743d" (UID: "82595633-1fc3-4dc7-a5bc-ce391c4d743d"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 10:18:53.005024 master-0 kubenswrapper[30420]: I0318 10:18:53.004904 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82595633-1fc3-4dc7-a5bc-ce391c4d743d-prometheus-k8s-rulefiles-0" (OuterVolumeSpecName: "prometheus-k8s-rulefiles-0") pod "82595633-1fc3-4dc7-a5bc-ce391c4d743d" (UID: "82595633-1fc3-4dc7-a5bc-ce391c4d743d"). InnerVolumeSpecName "prometheus-k8s-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:18:53.005024 master-0 kubenswrapper[30420]: I0318 10:18:53.004913 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-tls") pod "82595633-1fc3-4dc7-a5bc-ce391c4d743d" (UID: "82595633-1fc3-4dc7-a5bc-ce391c4d743d"). 
InnerVolumeSpecName "secret-prometheus-k8s-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 10:18:53.005024 master-0 kubenswrapper[30420]: I0318 10:18:53.004944 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-config" (OuterVolumeSpecName: "config") pod "82595633-1fc3-4dc7-a5bc-ce391c4d743d" (UID: "82595633-1fc3-4dc7-a5bc-ce391c4d743d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 10:18:53.005201 master-0 kubenswrapper[30420]: I0318 10:18:53.005046 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-config-out" (OuterVolumeSpecName: "config-out") pod "9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" (UID: "9adfdd99-ef2a-4698-8ef5-c2f97c4b6761"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 10:18:53.005483 master-0 kubenswrapper[30420]: I0318 10:18:53.005420 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-thanos-sidecar-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-thanos-sidecar-tls") pod "82595633-1fc3-4dc7-a5bc-ce391c4d743d" (UID: "82595633-1fc3-4dc7-a5bc-ce391c4d743d"). InnerVolumeSpecName "secret-prometheus-k8s-thanos-sidecar-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 10:18:53.006485 master-0 kubenswrapper[30420]: I0318 10:18:53.006438 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82595633-1fc3-4dc7-a5bc-ce391c4d743d-prometheus-k8s-db" (OuterVolumeSpecName: "prometheus-k8s-db") pod "82595633-1fc3-4dc7-a5bc-ce391c4d743d" (UID: "82595633-1fc3-4dc7-a5bc-ce391c4d743d"). InnerVolumeSpecName "prometheus-k8s-db". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 10:18:53.006631 master-0 kubenswrapper[30420]: I0318 10:18:53.006593 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82595633-1fc3-4dc7-a5bc-ce391c4d743d-config-out" (OuterVolumeSpecName: "config-out") pod "82595633-1fc3-4dc7-a5bc-ce391c4d743d" (UID: "82595633-1fc3-4dc7-a5bc-ce391c4d743d"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 10:18:53.007639 master-0 kubenswrapper[30420]: I0318 10:18:53.006850 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82595633-1fc3-4dc7-a5bc-ce391c4d743d-kube-api-access-95sh2" (OuterVolumeSpecName: "kube-api-access-95sh2") pod "82595633-1fc3-4dc7-a5bc-ce391c4d743d" (UID: "82595633-1fc3-4dc7-a5bc-ce391c4d743d"). InnerVolumeSpecName "kube-api-access-95sh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 10:18:53.015552 master-0 kubenswrapper[30420]: I0318 10:18:53.015470 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-kube-rbac-proxy" (OuterVolumeSpecName: "secret-kube-rbac-proxy") pod "82595633-1fc3-4dc7-a5bc-ce391c4d743d" (UID: "82595633-1fc3-4dc7-a5bc-ce391c4d743d"). InnerVolumeSpecName "secret-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 10:18:53.017294 master-0 kubenswrapper[30420]: I0318 10:18:53.017230 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-kube-rbac-proxy-metric" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-metric") pod "9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" (UID: "9adfdd99-ef2a-4698-8ef5-c2f97c4b6761"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-metric". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 10:18:53.022867 master-0 kubenswrapper[30420]: I0318 10:18:53.022373 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-web") pod "9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" (UID: "9adfdd99-ef2a-4698-8ef5-c2f97c4b6761"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-web". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 10:18:53.078276 master-0 kubenswrapper[30420]: I0318 10:18:53.078214 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-web-config" (OuterVolumeSpecName: "web-config") pod "82595633-1fc3-4dc7-a5bc-ce391c4d743d" (UID: "82595633-1fc3-4dc7-a5bc-ce391c4d743d"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 10:18:53.094203 master-0 kubenswrapper[30420]: I0318 10:18:53.094139 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-web-config" (OuterVolumeSpecName: "web-config") pod "9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" (UID: "9adfdd99-ef2a-4698-8ef5-c2f97c4b6761"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 10:18:53.102341 master-0 kubenswrapper[30420]: I0318 10:18:53.099053 30420 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/82595633-1fc3-4dc7-a5bc-ce391c4d743d-config-out\") on node \"master-0\" DevicePath \"\"" Mar 18 10:18:53.102341 master-0 kubenswrapper[30420]: I0318 10:18:53.099101 30420 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-kube-rbac-proxy-metric\") on node \"master-0\" DevicePath \"\"" Mar 18 10:18:53.102341 master-0 kubenswrapper[30420]: I0318 10:18:53.099119 30420 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-thanos-sidecar-tls\") on node \"master-0\" DevicePath \"\"" Mar 18 10:18:53.102341 master-0 kubenswrapper[30420]: I0318 10:18:53.099134 30420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95sh2\" (UniqueName: \"kubernetes.io/projected/82595633-1fc3-4dc7-a5bc-ce391c4d743d-kube-api-access-95sh2\") on node \"master-0\" DevicePath \"\"" Mar 18 10:18:53.102341 master-0 kubenswrapper[30420]: I0318 10:18:53.099148 30420 reconciler_common.go:293] "Volume detached for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/82595633-1fc3-4dc7-a5bc-ce391c4d743d-prometheus-k8s-db\") on node \"master-0\" DevicePath \"\"" Mar 18 10:18:53.102341 master-0 kubenswrapper[30420]: I0318 10:18:53.099159 30420 reconciler_common.go:293] "Volume detached for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-grpc-tls\") on node \"master-0\" DevicePath \"\"" Mar 18 10:18:53.102341 master-0 kubenswrapper[30420]: I0318 10:18:53.099171 30420 reconciler_common.go:293] 
"Volume detached for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/82595633-1fc3-4dc7-a5bc-ce391c4d743d-prometheus-k8s-rulefiles-0\") on node \"master-0\" DevicePath \"\"" Mar 18 10:18:53.102341 master-0 kubenswrapper[30420]: I0318 10:18:53.099185 30420 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-web-config\") on node \"master-0\" DevicePath \"\"" Mar 18 10:18:53.102341 master-0 kubenswrapper[30420]: I0318 10:18:53.099197 30420 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-kube-rbac-proxy\") on node \"master-0\" DevicePath \"\"" Mar 18 10:18:53.102341 master-0 kubenswrapper[30420]: I0318 10:18:53.099211 30420 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-config-out\") on node \"master-0\" DevicePath \"\"" Mar 18 10:18:53.102341 master-0 kubenswrapper[30420]: I0318 10:18:53.099224 30420 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-web-config\") on node \"master-0\" DevicePath \"\"" Mar 18 10:18:53.102341 master-0 kubenswrapper[30420]: I0318 10:18:53.099235 30420 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-config-volume\") on node \"master-0\" DevicePath \"\"" Mar 18 10:18:53.102341 master-0 kubenswrapper[30420]: I0318 10:18:53.099246 30420 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 10:18:53.102341 master-0 
kubenswrapper[30420]: I0318 10:18:53.099259 30420 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-thanos-prometheus-http-client-file\") on node \"master-0\" DevicePath \"\"" Mar 18 10:18:53.102341 master-0 kubenswrapper[30420]: I0318 10:18:53.099272 30420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjjcq\" (UniqueName: \"kubernetes.io/projected/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-kube-api-access-vjjcq\") on node \"master-0\" DevicePath \"\"" Mar 18 10:18:53.102341 master-0 kubenswrapper[30420]: I0318 10:18:53.099284 30420 reconciler_common.go:293] "Volume detached for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82595633-1fc3-4dc7-a5bc-ce391c4d743d-prometheus-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 10:18:53.102341 master-0 kubenswrapper[30420]: I0318 10:18:53.099296 30420 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-kube-rbac-proxy-web\") on node \"master-0\" DevicePath \"\"" Mar 18 10:18:53.102341 master-0 kubenswrapper[30420]: I0318 10:18:53.099309 30420 reconciler_common.go:293] "Volume detached for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-alertmanager-main-db\") on node \"master-0\" DevicePath \"\"" Mar 18 10:18:53.102341 master-0 kubenswrapper[30420]: I0318 10:18:53.099323 30420 reconciler_common.go:293] "Volume detached for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-kube-rbac-proxy\") on node \"master-0\" DevicePath \"\"" Mar 18 10:18:53.102341 master-0 kubenswrapper[30420]: I0318 10:18:53.099334 30420 reconciler_common.go:293] "Volume detached for volume 
\"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-secret-alertmanager-main-tls\") on node \"master-0\" DevicePath \"\"" Mar 18 10:18:53.102341 master-0 kubenswrapper[30420]: I0318 10:18:53.099348 30420 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/82595633-1fc3-4dc7-a5bc-ce391c4d743d-tls-assets\") on node \"master-0\" DevicePath \"\"" Mar 18 10:18:53.102341 master-0 kubenswrapper[30420]: I0318 10:18:53.099359 30420 reconciler_common.go:293] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-metrics-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 10:18:53.102341 master-0 kubenswrapper[30420]: I0318 10:18:53.099372 30420 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-tls-assets\") on node \"master-0\" DevicePath \"\"" Mar 18 10:18:53.102341 master-0 kubenswrapper[30420]: I0318 10:18:53.099383 30420 reconciler_common.go:293] "Volume detached for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/82595633-1fc3-4dc7-a5bc-ce391c4d743d-configmap-metrics-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 10:18:53.102341 master-0 kubenswrapper[30420]: I0318 10:18:53.099397 30420 reconciler_common.go:293] "Volume detached for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761-alertmanager-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 10:18:53.102341 master-0 kubenswrapper[30420]: I0318 10:18:53.099420 30420 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-config\") on node \"master-0\" DevicePath \"\"" Mar 18 10:18:53.102341 master-0 kubenswrapper[30420]: I0318 
10:18:53.099431 30420 reconciler_common.go:293] "Volume detached for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82595633-1fc3-4dc7-a5bc-ce391c4d743d-configmap-serving-certs-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 10:18:53.102341 master-0 kubenswrapper[30420]: I0318 10:18:53.099445 30420 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-kube-rbac-proxy-web\") on node \"master-0\" DevicePath \"\"" Mar 18 10:18:53.102341 master-0 kubenswrapper[30420]: I0318 10:18:53.099458 30420 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/82595633-1fc3-4dc7-a5bc-ce391c4d743d-secret-prometheus-k8s-tls\") on node \"master-0\" DevicePath \"\"" Mar 18 10:18:53.390586 master-0 kubenswrapper[30420]: I0318 10:18:53.390520 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6659f98f4-ccs7g"] Mar 18 10:18:53.394463 master-0 kubenswrapper[30420]: W0318 10:18:53.394404 30420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a3e75ac_917b_4aff_a146_89f408145ec5.slice/crio-524fcd697476dc8aeba9f98e2e08153b100d3c8cfe6a938c437df38a2198027c WatchSource:0}: Error finding container 524fcd697476dc8aeba9f98e2e08153b100d3c8cfe6a938c437df38a2198027c: Status 404 returned error can't find the container with id 524fcd697476dc8aeba9f98e2e08153b100d3c8cfe6a938c437df38a2198027c Mar 18 10:18:53.698478 master-0 kubenswrapper[30420]: I0318 10:18:53.698345 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9adfdd99-ef2a-4698-8ef5-c2f97c4b6761","Type":"ContainerDied","Data":"c0cbc9d3c69c7e7e7f33f0b0ddf267e0bec1122a1c08a1c35a8479db1d68b27c"} Mar 18 10:18:53.698478 
master-0 kubenswrapper[30420]: I0318 10:18:53.698407 30420 scope.go:117] "RemoveContainer" containerID="e9eba945cae2ffe1611d676653f381605ef3bc3f8ae1008a52eab79b2e860df4" Mar 18 10:18:53.699160 master-0 kubenswrapper[30420]: I0318 10:18:53.698507 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:53.710941 master-0 kubenswrapper[30420]: I0318 10:18:53.710896 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"82595633-1fc3-4dc7-a5bc-ce391c4d743d","Type":"ContainerDied","Data":"a769b1ef229aa220d3080039e58ca071fec3584c0e7796349294fecf9958f89d"} Mar 18 10:18:53.711105 master-0 kubenswrapper[30420]: I0318 10:18:53.711066 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:53.717496 master-0 kubenswrapper[30420]: I0318 10:18:53.717448 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6659f98f4-ccs7g" event={"ID":"0a3e75ac-917b-4aff-a146-89f408145ec5","Type":"ContainerStarted","Data":"8c94dbc385994221c233de438ce49c36b013a2b5464bffabec141ab24ec18a6e"} Mar 18 10:18:53.717567 master-0 kubenswrapper[30420]: I0318 10:18:53.717493 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6659f98f4-ccs7g" event={"ID":"0a3e75ac-917b-4aff-a146-89f408145ec5","Type":"ContainerStarted","Data":"524fcd697476dc8aeba9f98e2e08153b100d3c8cfe6a938c437df38a2198027c"} Mar 18 10:18:53.742510 master-0 kubenswrapper[30420]: I0318 10:18:53.742422 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6659f98f4-ccs7g" podStartSLOduration=1.742401029 podStartE2EDuration="1.742401029s" podCreationTimestamp="2026-03-18 10:18:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-18 10:18:53.740518651 +0000 UTC m=+497.793264580" watchObservedRunningTime="2026-03-18 10:18:53.742401029 +0000 UTC m=+497.795146958" Mar 18 10:18:53.776028 master-0 kubenswrapper[30420]: I0318 10:18:53.775919 30420 scope.go:117] "RemoveContainer" containerID="3e10f8a078a0a63498335680d1ef4600429c447d0025a7efed9dc9c399363a43" Mar 18 10:18:53.801892 master-0 kubenswrapper[30420]: I0318 10:18:53.801807 30420 scope.go:117] "RemoveContainer" containerID="e4236a5bb78301349ee653952bd3cb395f1b39a85f8de46e23e28e77a666e3c7" Mar 18 10:18:53.802471 master-0 kubenswrapper[30420]: I0318 10:18:53.802427 30420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 18 10:18:53.818888 master-0 kubenswrapper[30420]: I0318 10:18:53.818850 30420 scope.go:117] "RemoveContainer" containerID="b4681dda832d085e385da86341cb24481abad420db2010ec43eb55f255a7bff3" Mar 18 10:18:53.834113 master-0 kubenswrapper[30420]: I0318 10:18:53.834003 30420 scope.go:117] "RemoveContainer" containerID="b3e3abcb3eed9e3a76ccda69fb88c81863a9b6023fc0895bac0a49ac23f0964d" Mar 18 10:18:53.838223 master-0 kubenswrapper[30420]: I0318 10:18:53.838178 30420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 18 10:18:53.847730 master-0 kubenswrapper[30420]: I0318 10:18:53.847698 30420 scope.go:117] "RemoveContainer" containerID="6f0a607d3d4ed38bb00f164af6e11bcd0b44d7197e12694411a958ee8be276f5" Mar 18 10:18:53.863735 master-0 kubenswrapper[30420]: I0318 10:18:53.863689 30420 scope.go:117] "RemoveContainer" containerID="c67af3281f67d00b051eebcba127eebcaf44b1264ab590764fd8664d9451e0a7" Mar 18 10:18:53.879698 master-0 kubenswrapper[30420]: I0318 10:18:53.879664 30420 scope.go:117] "RemoveContainer" containerID="e16addfd28e3f5280697035643ff6b4e9e9620e0c0365e8d1b364e4a59da7ee7" Mar 18 10:18:53.907656 master-0 kubenswrapper[30420]: I0318 10:18:53.904053 30420 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 18 10:18:53.909499 master-0 kubenswrapper[30420]: I0318 10:18:53.908777 30420 scope.go:117] "RemoveContainer" containerID="677c8e0fe1cb41f8869ff6affa1a09ada04455dd6fb0bafbd39b72e228a5bed9" Mar 18 10:18:53.921499 master-0 kubenswrapper[30420]: I0318 10:18:53.921219 30420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 18 10:18:53.930543 master-0 kubenswrapper[30420]: I0318 10:18:53.930490 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 18 10:18:53.931022 master-0 kubenswrapper[30420]: E0318 10:18:53.930738 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82595633-1fc3-4dc7-a5bc-ce391c4d743d" containerName="config-reloader" Mar 18 10:18:53.931022 master-0 kubenswrapper[30420]: I0318 10:18:53.930754 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="82595633-1fc3-4dc7-a5bc-ce391c4d743d" containerName="config-reloader" Mar 18 10:18:53.931022 master-0 kubenswrapper[30420]: E0318 10:18:53.930775 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82595633-1fc3-4dc7-a5bc-ce391c4d743d" containerName="thanos-sidecar" Mar 18 10:18:53.931022 master-0 kubenswrapper[30420]: I0318 10:18:53.930780 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="82595633-1fc3-4dc7-a5bc-ce391c4d743d" containerName="thanos-sidecar" Mar 18 10:18:53.931022 master-0 kubenswrapper[30420]: E0318 10:18:53.930790 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" containerName="kube-rbac-proxy-web" Mar 18 10:18:53.931022 master-0 kubenswrapper[30420]: I0318 10:18:53.930798 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" containerName="kube-rbac-proxy-web" Mar 18 10:18:53.931022 master-0 kubenswrapper[30420]: E0318 10:18:53.930810 30420 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="82595633-1fc3-4dc7-a5bc-ce391c4d743d" containerName="init-config-reloader" Mar 18 10:18:53.931022 master-0 kubenswrapper[30420]: I0318 10:18:53.930816 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="82595633-1fc3-4dc7-a5bc-ce391c4d743d" containerName="init-config-reloader" Mar 18 10:18:53.931022 master-0 kubenswrapper[30420]: E0318 10:18:53.930841 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" containerName="prom-label-proxy" Mar 18 10:18:53.931022 master-0 kubenswrapper[30420]: I0318 10:18:53.930847 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" containerName="prom-label-proxy" Mar 18 10:18:53.931022 master-0 kubenswrapper[30420]: E0318 10:18:53.930859 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" containerName="alertmanager" Mar 18 10:18:53.931022 master-0 kubenswrapper[30420]: I0318 10:18:53.930865 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" containerName="alertmanager" Mar 18 10:18:53.931022 master-0 kubenswrapper[30420]: E0318 10:18:53.930875 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82595633-1fc3-4dc7-a5bc-ce391c4d743d" containerName="kube-rbac-proxy" Mar 18 10:18:53.931022 master-0 kubenswrapper[30420]: I0318 10:18:53.930881 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="82595633-1fc3-4dc7-a5bc-ce391c4d743d" containerName="kube-rbac-proxy" Mar 18 10:18:53.931022 master-0 kubenswrapper[30420]: E0318 10:18:53.930894 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" containerName="kube-rbac-proxy-metric" Mar 18 10:18:53.931022 master-0 kubenswrapper[30420]: I0318 10:18:53.930900 30420 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" containerName="kube-rbac-proxy-metric" Mar 18 10:18:53.931022 master-0 kubenswrapper[30420]: E0318 10:18:53.930914 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" containerName="config-reloader" Mar 18 10:18:53.931022 master-0 kubenswrapper[30420]: I0318 10:18:53.930920 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" containerName="config-reloader" Mar 18 10:18:53.931022 master-0 kubenswrapper[30420]: E0318 10:18:53.930930 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" containerName="init-config-reloader" Mar 18 10:18:53.931022 master-0 kubenswrapper[30420]: I0318 10:18:53.930935 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" containerName="init-config-reloader" Mar 18 10:18:53.931022 master-0 kubenswrapper[30420]: E0318 10:18:53.930952 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82595633-1fc3-4dc7-a5bc-ce391c4d743d" containerName="kube-rbac-proxy-thanos" Mar 18 10:18:53.931022 master-0 kubenswrapper[30420]: I0318 10:18:53.930958 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="82595633-1fc3-4dc7-a5bc-ce391c4d743d" containerName="kube-rbac-proxy-thanos" Mar 18 10:18:53.931022 master-0 kubenswrapper[30420]: E0318 10:18:53.930970 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" containerName="kube-rbac-proxy" Mar 18 10:18:53.931022 master-0 kubenswrapper[30420]: I0318 10:18:53.930976 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" containerName="kube-rbac-proxy" Mar 18 10:18:53.931022 master-0 kubenswrapper[30420]: E0318 10:18:53.930989 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82595633-1fc3-4dc7-a5bc-ce391c4d743d" 
containerName="kube-rbac-proxy-web" Mar 18 10:18:53.931022 master-0 kubenswrapper[30420]: I0318 10:18:53.930996 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="82595633-1fc3-4dc7-a5bc-ce391c4d743d" containerName="kube-rbac-proxy-web" Mar 18 10:18:53.931022 master-0 kubenswrapper[30420]: E0318 10:18:53.931005 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82595633-1fc3-4dc7-a5bc-ce391c4d743d" containerName="prometheus" Mar 18 10:18:53.931022 master-0 kubenswrapper[30420]: I0318 10:18:53.931011 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="82595633-1fc3-4dc7-a5bc-ce391c4d743d" containerName="prometheus" Mar 18 10:18:53.932071 master-0 kubenswrapper[30420]: I0318 10:18:53.931137 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" containerName="kube-rbac-proxy-metric" Mar 18 10:18:53.932071 master-0 kubenswrapper[30420]: I0318 10:18:53.931160 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="82595633-1fc3-4dc7-a5bc-ce391c4d743d" containerName="thanos-sidecar" Mar 18 10:18:53.932071 master-0 kubenswrapper[30420]: I0318 10:18:53.931166 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" containerName="prom-label-proxy" Mar 18 10:18:53.932071 master-0 kubenswrapper[30420]: I0318 10:18:53.931180 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" containerName="kube-rbac-proxy-web" Mar 18 10:18:53.932071 master-0 kubenswrapper[30420]: I0318 10:18:53.931188 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" containerName="alertmanager" Mar 18 10:18:53.932071 master-0 kubenswrapper[30420]: I0318 10:18:53.931203 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" containerName="kube-rbac-proxy" Mar 18 10:18:53.932071 
master-0 kubenswrapper[30420]: I0318 10:18:53.931212 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="82595633-1fc3-4dc7-a5bc-ce391c4d743d" containerName="kube-rbac-proxy-web" Mar 18 10:18:53.932071 master-0 kubenswrapper[30420]: I0318 10:18:53.931220 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="82595633-1fc3-4dc7-a5bc-ce391c4d743d" containerName="prometheus" Mar 18 10:18:53.932071 master-0 kubenswrapper[30420]: I0318 10:18:53.931232 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" containerName="config-reloader" Mar 18 10:18:53.932071 master-0 kubenswrapper[30420]: I0318 10:18:53.931241 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="82595633-1fc3-4dc7-a5bc-ce391c4d743d" containerName="kube-rbac-proxy-thanos" Mar 18 10:18:53.932071 master-0 kubenswrapper[30420]: I0318 10:18:53.931252 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="82595633-1fc3-4dc7-a5bc-ce391c4d743d" containerName="config-reloader" Mar 18 10:18:53.932071 master-0 kubenswrapper[30420]: I0318 10:18:53.931260 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="82595633-1fc3-4dc7-a5bc-ce391c4d743d" containerName="kube-rbac-proxy" Mar 18 10:18:53.933121 master-0 kubenswrapper[30420]: I0318 10:18:53.933068 30420 scope.go:117] "RemoveContainer" containerID="99ea07cc70b5b202dcc0d5bb6ffbd3c680b98550ccd0e11007931357c0554eb1" Mar 18 10:18:53.934003 master-0 kubenswrapper[30420]: I0318 10:18:53.933979 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:53.936579 master-0 kubenswrapper[30420]: I0318 10:18:53.936548 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Mar 18 10:18:53.936803 master-0 kubenswrapper[30420]: I0318 10:18:53.936742 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Mar 18 10:18:53.937155 master-0 kubenswrapper[30420]: I0318 10:18:53.937123 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Mar 18 10:18:53.937367 master-0 kubenswrapper[30420]: I0318 10:18:53.937220 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Mar 18 10:18:53.944263 master-0 kubenswrapper[30420]: I0318 10:18:53.944210 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Mar 18 10:18:53.944369 master-0 kubenswrapper[30420]: I0318 10:18:53.944284 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Mar 18 10:18:53.945198 master-0 kubenswrapper[30420]: I0318 10:18:53.945149 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Mar 18 10:18:53.953957 master-0 kubenswrapper[30420]: I0318 10:18:53.953913 30420 scope.go:117] "RemoveContainer" containerID="7a02ae2649c338a61edde895fd21b3f44e6f25ebc4803ff3f064ad18b3962b9c" Mar 18 10:18:53.958743 master-0 kubenswrapper[30420]: I0318 10:18:53.958703 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Mar 18 10:18:53.976579 master-0 kubenswrapper[30420]: I0318 10:18:53.976273 30420 scope.go:117] "RemoveContainer" 
containerID="dcf2b1ec05bab2e946c1cab6fd5813ae02216ee988779d859d784f1aefec0d8d" Mar 18 10:18:53.986061 master-0 kubenswrapper[30420]: I0318 10:18:53.986013 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 18 10:18:53.991390 master-0 kubenswrapper[30420]: I0318 10:18:53.991339 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:53.996224 master-0 kubenswrapper[30420]: I0318 10:18:53.996156 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 18 10:18:54.001675 master-0 kubenswrapper[30420]: I0318 10:18:54.001615 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Mar 18 10:18:54.002513 master-0 kubenswrapper[30420]: I0318 10:18:54.001815 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Mar 18 10:18:54.002513 master-0 kubenswrapper[30420]: I0318 10:18:54.001882 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Mar 18 10:18:54.002513 master-0 kubenswrapper[30420]: I0318 10:18:54.001944 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Mar 18 10:18:54.002513 master-0 kubenswrapper[30420]: I0318 10:18:54.001985 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Mar 18 10:18:54.002513 master-0 kubenswrapper[30420]: I0318 10:18:54.002054 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Mar 18 10:18:54.002513 master-0 kubenswrapper[30420]: I0318 10:18:54.002075 30420 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-cjuqtgluoqmcm" Mar 18 10:18:54.002513 master-0 kubenswrapper[30420]: I0318 10:18:54.002141 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Mar 18 10:18:54.002513 master-0 kubenswrapper[30420]: I0318 10:18:54.002195 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Mar 18 10:18:54.002906 master-0 kubenswrapper[30420]: I0318 10:18:54.002575 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Mar 18 10:18:54.007171 master-0 kubenswrapper[30420]: I0318 10:18:54.004305 30420 scope.go:117] "RemoveContainer" containerID="a7d7f851f4c1584aa500215adc79e22cec4d88779ff6943dd801eb2dcf6d6097" Mar 18 10:18:54.007171 master-0 kubenswrapper[30420]: I0318 10:18:54.005177 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Mar 18 10:18:54.008139 master-0 kubenswrapper[30420]: I0318 10:18:54.008076 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Mar 18 10:18:54.018715 master-0 kubenswrapper[30420]: I0318 10:18:54.018660 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2941d21d-0c38-4037-87ed-ebd188ed5f9f-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.019494 master-0 kubenswrapper[30420]: I0318 10:18:54.018728 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2941d21d-0c38-4037-87ed-ebd188ed5f9f-tls-assets\") pod \"prometheus-k8s-0\" 
(UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.019494 master-0 kubenswrapper[30420]: I0318 10:18:54.018755 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2941d21d-0c38-4037-87ed-ebd188ed5f9f-config\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.019494 master-0 kubenswrapper[30420]: I0318 10:18:54.018779 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c4abc917-fc2d-4957-9270-86bb310ecf75-web-config\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.019494 master-0 kubenswrapper[30420]: I0318 10:18:54.018840 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c4abc917-fc2d-4957-9270-86bb310ecf75-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.019494 master-0 kubenswrapper[30420]: I0318 10:18:54.018866 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c4abc917-fc2d-4957-9270-86bb310ecf75-config-out\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.019494 master-0 kubenswrapper[30420]: I0318 10:18:54.018917 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: 
\"kubernetes.io/secret/c4abc917-fc2d-4957-9270-86bb310ecf75-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.019494 master-0 kubenswrapper[30420]: I0318 10:18:54.018958 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2941d21d-0c38-4037-87ed-ebd188ed5f9f-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.019494 master-0 kubenswrapper[30420]: I0318 10:18:54.018991 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2941d21d-0c38-4037-87ed-ebd188ed5f9f-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.019494 master-0 kubenswrapper[30420]: I0318 10:18:54.019020 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/2941d21d-0c38-4037-87ed-ebd188ed5f9f-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.019494 master-0 kubenswrapper[30420]: I0318 10:18:54.019055 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c4abc917-fc2d-4957-9270-86bb310ecf75-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.019494 master-0 kubenswrapper[30420]: I0318 
10:18:54.019087 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmkdp\" (UniqueName: \"kubernetes.io/projected/c4abc917-fc2d-4957-9270-86bb310ecf75-kube-api-access-pmkdp\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.019494 master-0 kubenswrapper[30420]: I0318 10:18:54.019112 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c4abc917-fc2d-4957-9270-86bb310ecf75-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.019494 master-0 kubenswrapper[30420]: I0318 10:18:54.019138 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/2941d21d-0c38-4037-87ed-ebd188ed5f9f-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.019494 master-0 kubenswrapper[30420]: I0318 10:18:54.019182 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2941d21d-0c38-4037-87ed-ebd188ed5f9f-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.019494 master-0 kubenswrapper[30420]: I0318 10:18:54.019207 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/2941d21d-0c38-4037-87ed-ebd188ed5f9f-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.019494 master-0 kubenswrapper[30420]: I0318 10:18:54.019237 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/c4abc917-fc2d-4957-9270-86bb310ecf75-config-volume\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.019494 master-0 kubenswrapper[30420]: I0318 10:18:54.019258 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2941d21d-0c38-4037-87ed-ebd188ed5f9f-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.019494 master-0 kubenswrapper[30420]: I0318 10:18:54.019295 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4abc917-fc2d-4957-9270-86bb310ecf75-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.019494 master-0 kubenswrapper[30420]: I0318 10:18:54.019322 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/2941d21d-0c38-4037-87ed-ebd188ed5f9f-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.019494 master-0 kubenswrapper[30420]: I0318 10:18:54.019379 
30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/2941d21d-0c38-4037-87ed-ebd188ed5f9f-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.019494 master-0 kubenswrapper[30420]: I0318 10:18:54.019419 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9dxd\" (UniqueName: \"kubernetes.io/projected/2941d21d-0c38-4037-87ed-ebd188ed5f9f-kube-api-access-k9dxd\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.019494 master-0 kubenswrapper[30420]: I0318 10:18:54.019441 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c4abc917-fc2d-4957-9270-86bb310ecf75-tls-assets\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.019494 master-0 kubenswrapper[30420]: I0318 10:18:54.019486 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/c4abc917-fc2d-4957-9270-86bb310ecf75-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.020499 master-0 kubenswrapper[30420]: I0318 10:18:54.019518 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2941d21d-0c38-4037-87ed-ebd188ed5f9f-web-config\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " 
pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.020499 master-0 kubenswrapper[30420]: I0318 10:18:54.019548 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2941d21d-0c38-4037-87ed-ebd188ed5f9f-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.020499 master-0 kubenswrapper[30420]: I0318 10:18:54.019584 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2941d21d-0c38-4037-87ed-ebd188ed5f9f-config-out\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.020499 master-0 kubenswrapper[30420]: I0318 10:18:54.019608 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2941d21d-0c38-4037-87ed-ebd188ed5f9f-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.020499 master-0 kubenswrapper[30420]: I0318 10:18:54.019632 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/2941d21d-0c38-4037-87ed-ebd188ed5f9f-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.020499 master-0 kubenswrapper[30420]: I0318 10:18:54.019658 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: 
\"kubernetes.io/secret/c4abc917-fc2d-4957-9270-86bb310ecf75-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.039840 master-0 kubenswrapper[30420]: I0318 10:18:54.039760 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 18 10:18:54.062647 master-0 kubenswrapper[30420]: I0318 10:18:54.062600 30420 scope.go:117] "RemoveContainer" containerID="b10262cb013c2d9967201332c3435cebf34f30ce3594cb1f99a512f176d9e38c" Mar 18 10:18:54.121393 master-0 kubenswrapper[30420]: I0318 10:18:54.120979 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/2941d21d-0c38-4037-87ed-ebd188ed5f9f-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.121393 master-0 kubenswrapper[30420]: I0318 10:18:54.121039 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9dxd\" (UniqueName: \"kubernetes.io/projected/2941d21d-0c38-4037-87ed-ebd188ed5f9f-kube-api-access-k9dxd\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.121393 master-0 kubenswrapper[30420]: I0318 10:18:54.121174 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c4abc917-fc2d-4957-9270-86bb310ecf75-tls-assets\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.121393 master-0 kubenswrapper[30420]: I0318 10:18:54.121195 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" 
(UniqueName: \"kubernetes.io/empty-dir/c4abc917-fc2d-4957-9270-86bb310ecf75-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.121393 master-0 kubenswrapper[30420]: I0318 10:18:54.121302 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2941d21d-0c38-4037-87ed-ebd188ed5f9f-web-config\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.121987 master-0 kubenswrapper[30420]: I0318 10:18:54.121415 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2941d21d-0c38-4037-87ed-ebd188ed5f9f-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.121987 master-0 kubenswrapper[30420]: I0318 10:18:54.121514 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2941d21d-0c38-4037-87ed-ebd188ed5f9f-config-out\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.121987 master-0 kubenswrapper[30420]: I0318 10:18:54.121547 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2941d21d-0c38-4037-87ed-ebd188ed5f9f-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.121987 master-0 kubenswrapper[30420]: I0318 10:18:54.121571 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/2941d21d-0c38-4037-87ed-ebd188ed5f9f-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.121987 master-0 kubenswrapper[30420]: I0318 10:18:54.121603 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/c4abc917-fc2d-4957-9270-86bb310ecf75-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.121987 master-0 kubenswrapper[30420]: I0318 10:18:54.121634 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2941d21d-0c38-4037-87ed-ebd188ed5f9f-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.121987 master-0 kubenswrapper[30420]: I0318 10:18:54.121677 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2941d21d-0c38-4037-87ed-ebd188ed5f9f-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.121987 master-0 kubenswrapper[30420]: I0318 10:18:54.121705 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2941d21d-0c38-4037-87ed-ebd188ed5f9f-config\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.121987 master-0 kubenswrapper[30420]: I0318 10:18:54.121736 30420 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c4abc917-fc2d-4957-9270-86bb310ecf75-web-config\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.121987 master-0 kubenswrapper[30420]: I0318 10:18:54.121760 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c4abc917-fc2d-4957-9270-86bb310ecf75-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.121987 master-0 kubenswrapper[30420]: I0318 10:18:54.121769 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/c4abc917-fc2d-4957-9270-86bb310ecf75-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.121987 master-0 kubenswrapper[30420]: I0318 10:18:54.121783 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c4abc917-fc2d-4957-9270-86bb310ecf75-config-out\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.121987 master-0 kubenswrapper[30420]: I0318 10:18:54.121912 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/c4abc917-fc2d-4957-9270-86bb310ecf75-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.122663 master-0 kubenswrapper[30420]: I0318 
10:18:54.122629 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/2941d21d-0c38-4037-87ed-ebd188ed5f9f-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.123482 master-0 kubenswrapper[30420]: I0318 10:18:54.122752 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2941d21d-0c38-4037-87ed-ebd188ed5f9f-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.123670 master-0 kubenswrapper[30420]: I0318 10:18:54.123643 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2941d21d-0c38-4037-87ed-ebd188ed5f9f-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.123802 master-0 kubenswrapper[30420]: I0318 10:18:54.123765 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2941d21d-0c38-4037-87ed-ebd188ed5f9f-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.123942 master-0 kubenswrapper[30420]: I0318 10:18:54.123786 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/2941d21d-0c38-4037-87ed-ebd188ed5f9f-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 
10:18:54.124089 master-0 kubenswrapper[30420]: I0318 10:18:54.124071 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c4abc917-fc2d-4957-9270-86bb310ecf75-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.124233 master-0 kubenswrapper[30420]: I0318 10:18:54.124218 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmkdp\" (UniqueName: \"kubernetes.io/projected/c4abc917-fc2d-4957-9270-86bb310ecf75-kube-api-access-pmkdp\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.124352 master-0 kubenswrapper[30420]: I0318 10:18:54.124337 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c4abc917-fc2d-4957-9270-86bb310ecf75-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.124472 master-0 kubenswrapper[30420]: I0318 10:18:54.124447 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2941d21d-0c38-4037-87ed-ebd188ed5f9f-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.124554 master-0 kubenswrapper[30420]: I0318 10:18:54.124460 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/2941d21d-0c38-4037-87ed-ebd188ed5f9f-secret-prometheus-k8s-thanos-sidecar-tls\") pod 
\"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.124652 master-0 kubenswrapper[30420]: I0318 10:18:54.124634 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2941d21d-0c38-4037-87ed-ebd188ed5f9f-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.124761 master-0 kubenswrapper[30420]: I0318 10:18:54.124746 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2941d21d-0c38-4037-87ed-ebd188ed5f9f-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.124909 master-0 kubenswrapper[30420]: I0318 10:18:54.124894 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/c4abc917-fc2d-4957-9270-86bb310ecf75-config-volume\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.124999 master-0 kubenswrapper[30420]: I0318 10:18:54.124985 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2941d21d-0c38-4037-87ed-ebd188ed5f9f-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.125177 master-0 kubenswrapper[30420]: I0318 10:18:54.125156 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c4abc917-fc2d-4957-9270-86bb310ecf75-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.125289 master-0 kubenswrapper[30420]: I0318 10:18:54.125274 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/2941d21d-0c38-4037-87ed-ebd188ed5f9f-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.125893 master-0 kubenswrapper[30420]: I0318 10:18:54.125857 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/2941d21d-0c38-4037-87ed-ebd188ed5f9f-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.126100 master-0 kubenswrapper[30420]: I0318 10:18:54.126065 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2941d21d-0c38-4037-87ed-ebd188ed5f9f-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.127524 master-0 kubenswrapper[30420]: I0318 10:18:54.126444 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2941d21d-0c38-4037-87ed-ebd188ed5f9f-web-config\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.127524 master-0 kubenswrapper[30420]: I0318 10:18:54.126705 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" 
(UniqueName: \"kubernetes.io/configmap/c4abc917-fc2d-4957-9270-86bb310ecf75-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.127524 master-0 kubenswrapper[30420]: I0318 10:18:54.123667 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2941d21d-0c38-4037-87ed-ebd188ed5f9f-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.129869 master-0 kubenswrapper[30420]: I0318 10:18:54.129797 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/2941d21d-0c38-4037-87ed-ebd188ed5f9f-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.130941 master-0 kubenswrapper[30420]: I0318 10:18:54.130798 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4abc917-fc2d-4957-9270-86bb310ecf75-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.131485 master-0 kubenswrapper[30420]: I0318 10:18:54.131418 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/c4abc917-fc2d-4957-9270-86bb310ecf75-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.131683 master-0 kubenswrapper[30420]: I0318 10:18:54.131646 
30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c4abc917-fc2d-4957-9270-86bb310ecf75-tls-assets\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.139662 master-0 kubenswrapper[30420]: I0318 10:18:54.139606 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2941d21d-0c38-4037-87ed-ebd188ed5f9f-config-out\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.140104 master-0 kubenswrapper[30420]: I0318 10:18:54.140052 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/2941d21d-0c38-4037-87ed-ebd188ed5f9f-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.140557 master-0 kubenswrapper[30420]: I0318 10:18:54.140517 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c4abc917-fc2d-4957-9270-86bb310ecf75-web-config\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.140627 master-0 kubenswrapper[30420]: I0318 10:18:54.140595 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c4abc917-fc2d-4957-9270-86bb310ecf75-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.140717 master-0 kubenswrapper[30420]: I0318 10:18:54.140681 30420 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2941d21d-0c38-4037-87ed-ebd188ed5f9f-config\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.141335 master-0 kubenswrapper[30420]: I0318 10:18:54.141298 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2941d21d-0c38-4037-87ed-ebd188ed5f9f-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.141496 master-0 kubenswrapper[30420]: I0318 10:18:54.141460 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2941d21d-0c38-4037-87ed-ebd188ed5f9f-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.141496 master-0 kubenswrapper[30420]: I0318 10:18:54.141477 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/c4abc917-fc2d-4957-9270-86bb310ecf75-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.141721 master-0 kubenswrapper[30420]: I0318 10:18:54.141695 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2941d21d-0c38-4037-87ed-ebd188ed5f9f-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.142255 master-0 kubenswrapper[30420]: I0318 10:18:54.142207 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/2941d21d-0c38-4037-87ed-ebd188ed5f9f-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.142847 master-0 kubenswrapper[30420]: I0318 10:18:54.142531 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2941d21d-0c38-4037-87ed-ebd188ed5f9f-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.143010 master-0 kubenswrapper[30420]: I0318 10:18:54.131849 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c4abc917-fc2d-4957-9270-86bb310ecf75-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.143462 master-0 kubenswrapper[30420]: I0318 10:18:54.143414 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/c4abc917-fc2d-4957-9270-86bb310ecf75-config-volume\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.145516 master-0 kubenswrapper[30420]: I0318 10:18:54.145482 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c4abc917-fc2d-4957-9270-86bb310ecf75-config-out\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.150264 master-0 kubenswrapper[30420]: I0318 10:18:54.150227 30420 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pmkdp\" (UniqueName: \"kubernetes.io/projected/c4abc917-fc2d-4957-9270-86bb310ecf75-kube-api-access-pmkdp\") pod \"alertmanager-main-0\" (UID: \"c4abc917-fc2d-4957-9270-86bb310ecf75\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.151113 master-0 kubenswrapper[30420]: I0318 10:18:54.151049 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2941d21d-0c38-4037-87ed-ebd188ed5f9f-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.154490 master-0 kubenswrapper[30420]: I0318 10:18:54.154445 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9dxd\" (UniqueName: \"kubernetes.io/projected/2941d21d-0c38-4037-87ed-ebd188ed5f9f-kube-api-access-k9dxd\") pod \"prometheus-k8s-0\" (UID: \"2941d21d-0c38-4037-87ed-ebd188ed5f9f\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.181224 master-0 kubenswrapper[30420]: I0318 10:18:54.181158 30420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82595633-1fc3-4dc7-a5bc-ce391c4d743d" path="/var/lib/kubelet/pods/82595633-1fc3-4dc7-a5bc-ce391c4d743d/volumes" Mar 18 10:18:54.182390 master-0 kubenswrapper[30420]: I0318 10:18:54.182356 30420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9adfdd99-ef2a-4698-8ef5-c2f97c4b6761" path="/var/lib/kubelet/pods/9adfdd99-ef2a-4698-8ef5-c2f97c4b6761/volumes" Mar 18 10:18:54.261270 master-0 kubenswrapper[30420]: I0318 10:18:54.261110 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 18 10:18:54.316665 master-0 kubenswrapper[30420]: I0318 10:18:54.316597 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:18:54.751683 master-0 kubenswrapper[30420]: W0318 10:18:54.751630 30420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4abc917_fc2d_4957_9270_86bb310ecf75.slice/crio-f7f523b9d653e144ed5e182abd84b590640dc0f717d23c9e5de44c350931c304 WatchSource:0}: Error finding container f7f523b9d653e144ed5e182abd84b590640dc0f717d23c9e5de44c350931c304: Status 404 returned error can't find the container with id f7f523b9d653e144ed5e182abd84b590640dc0f717d23c9e5de44c350931c304 Mar 18 10:18:54.753592 master-0 kubenswrapper[30420]: I0318 10:18:54.753554 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 18 10:18:54.838655 master-0 kubenswrapper[30420]: I0318 10:18:54.838222 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 18 10:18:54.847228 master-0 kubenswrapper[30420]: W0318 10:18:54.847138 30420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2941d21d_0c38_4037_87ed_ebd188ed5f9f.slice/crio-df0989e3e89f36a85c2d50e1a3a050fd300ae42722b4ce5c74ff07615ddf7994 WatchSource:0}: Error finding container df0989e3e89f36a85c2d50e1a3a050fd300ae42722b4ce5c74ff07615ddf7994: Status 404 returned error can't find the container with id df0989e3e89f36a85c2d50e1a3a050fd300ae42722b4ce5c74ff07615ddf7994 Mar 18 10:18:55.735456 master-0 kubenswrapper[30420]: I0318 10:18:55.735402 30420 generic.go:334] "Generic (PLEG): container finished" podID="c4abc917-fc2d-4957-9270-86bb310ecf75" containerID="c775ca6ed36926e73439ceff8de5feb4265068680cf94fc57375edb40ac5a46d" exitCode=0 Mar 18 10:18:55.735679 master-0 kubenswrapper[30420]: I0318 10:18:55.735474 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"c4abc917-fc2d-4957-9270-86bb310ecf75","Type":"ContainerDied","Data":"c775ca6ed36926e73439ceff8de5feb4265068680cf94fc57375edb40ac5a46d"} Mar 18 10:18:55.735679 master-0 kubenswrapper[30420]: I0318 10:18:55.735516 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c4abc917-fc2d-4957-9270-86bb310ecf75","Type":"ContainerStarted","Data":"f7f523b9d653e144ed5e182abd84b590640dc0f717d23c9e5de44c350931c304"} Mar 18 10:18:55.738600 master-0 kubenswrapper[30420]: I0318 10:18:55.738545 30420 generic.go:334] "Generic (PLEG): container finished" podID="2941d21d-0c38-4037-87ed-ebd188ed5f9f" containerID="002a8a405a79e86ed3a28d78a89ca2029fc0aca2a7562ef3a66e65b951116ee0" exitCode=0 Mar 18 10:18:55.738690 master-0 kubenswrapper[30420]: I0318 10:18:55.738592 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2941d21d-0c38-4037-87ed-ebd188ed5f9f","Type":"ContainerDied","Data":"002a8a405a79e86ed3a28d78a89ca2029fc0aca2a7562ef3a66e65b951116ee0"} Mar 18 10:18:55.738690 master-0 kubenswrapper[30420]: I0318 10:18:55.738653 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2941d21d-0c38-4037-87ed-ebd188ed5f9f","Type":"ContainerStarted","Data":"df0989e3e89f36a85c2d50e1a3a050fd300ae42722b4ce5c74ff07615ddf7994"} Mar 18 10:18:56.753261 master-0 kubenswrapper[30420]: I0318 10:18:56.753082 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c4abc917-fc2d-4957-9270-86bb310ecf75","Type":"ContainerStarted","Data":"fe13478c8dc552181f89a06d93b59a5ab6f03a5b023eba67d2c0428dcf4e787f"} Mar 18 10:18:56.753261 master-0 kubenswrapper[30420]: I0318 10:18:56.753165 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"c4abc917-fc2d-4957-9270-86bb310ecf75","Type":"ContainerStarted","Data":"dba109adf92976d2d6af8afc0239b808b66c801d9e18f8e3d1482f0478ff0d54"} Mar 18 10:18:56.753261 master-0 kubenswrapper[30420]: I0318 10:18:56.753178 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c4abc917-fc2d-4957-9270-86bb310ecf75","Type":"ContainerStarted","Data":"ad95c02940ae7b913476805174071165724b7adc5c0ab1bb13fc822cb5463fb1"} Mar 18 10:18:56.753261 master-0 kubenswrapper[30420]: I0318 10:18:56.753190 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c4abc917-fc2d-4957-9270-86bb310ecf75","Type":"ContainerStarted","Data":"0fc7312fc88855844fe01cb77dbf944550687163b9fd8f3f70f75d79c5fcb549"} Mar 18 10:18:56.753261 master-0 kubenswrapper[30420]: I0318 10:18:56.753201 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c4abc917-fc2d-4957-9270-86bb310ecf75","Type":"ContainerStarted","Data":"57adde5f777e1da86d0f0b96e4cbb5658394a81649192cd57de24d244daeebb8"} Mar 18 10:18:56.773316 master-0 kubenswrapper[30420]: I0318 10:18:56.755367 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2941d21d-0c38-4037-87ed-ebd188ed5f9f","Type":"ContainerStarted","Data":"2a43b345d363f80c62d5904fe01d4b65fab6c1a263c6aa87037f4ea788b4f924"} Mar 18 10:18:56.773316 master-0 kubenswrapper[30420]: I0318 10:18:56.755395 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2941d21d-0c38-4037-87ed-ebd188ed5f9f","Type":"ContainerStarted","Data":"3a557b00a9d897598136188398553f67d458d0e03a763197f283b5269878dbc2"} Mar 18 10:18:56.773316 master-0 kubenswrapper[30420]: I0318 10:18:56.755408 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"2941d21d-0c38-4037-87ed-ebd188ed5f9f","Type":"ContainerStarted","Data":"b4af7db02431008da474291f01c580cf6f950bd3a741974d76460a2581167e90"} Mar 18 10:18:56.773316 master-0 kubenswrapper[30420]: I0318 10:18:56.755420 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2941d21d-0c38-4037-87ed-ebd188ed5f9f","Type":"ContainerStarted","Data":"27c6815b490736377190b1dc2436c742ac29d23648a9a2cdb7afad3b650a8832"} Mar 18 10:18:57.773649 master-0 kubenswrapper[30420]: I0318 10:18:57.773530 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c4abc917-fc2d-4957-9270-86bb310ecf75","Type":"ContainerStarted","Data":"1932ae5796b8b4fd89cb14859beb51ab9856941abcecd74543bed711d09e9adb"} Mar 18 10:18:57.778917 master-0 kubenswrapper[30420]: I0318 10:18:57.778750 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2941d21d-0c38-4037-87ed-ebd188ed5f9f","Type":"ContainerStarted","Data":"bb3e92b588440ec1de4fcc58d0f29d991a0ae8c7bd6ece5412c8554b2f71316a"} Mar 18 10:18:57.778917 master-0 kubenswrapper[30420]: I0318 10:18:57.778865 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2941d21d-0c38-4037-87ed-ebd188ed5f9f","Type":"ContainerStarted","Data":"e9e0f2eaab68e88cb270057c0a80cc987709c53014aeed8865e65af08ba7f2bd"} Mar 18 10:18:57.816592 master-0 kubenswrapper[30420]: I0318 10:18:57.816471 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=4.816435279 podStartE2EDuration="4.816435279s" podCreationTimestamp="2026-03-18 10:18:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:18:57.812224673 +0000 UTC m=+501.864970612" 
watchObservedRunningTime="2026-03-18 10:18:57.816435279 +0000 UTC m=+501.869181248" Mar 18 10:18:57.871236 master-0 kubenswrapper[30420]: I0318 10:18:57.871009 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=4.870901857 podStartE2EDuration="4.870901857s" podCreationTimestamp="2026-03-18 10:18:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:18:57.867431579 +0000 UTC m=+501.920177548" watchObservedRunningTime="2026-03-18 10:18:57.870901857 +0000 UTC m=+501.923647806" Mar 18 10:18:59.317090 master-0 kubenswrapper[30420]: I0318 10:18:59.316987 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 10:19:02.853250 master-0 kubenswrapper[30420]: I0318 10:19:02.853152 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6659f98f4-ccs7g" Mar 18 10:19:02.853250 master-0 kubenswrapper[30420]: I0318 10:19:02.853237 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6659f98f4-ccs7g" Mar 18 10:19:02.861850 master-0 kubenswrapper[30420]: I0318 10:19:02.861799 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6659f98f4-ccs7g" Mar 18 10:19:03.844034 master-0 kubenswrapper[30420]: I0318 10:19:03.843927 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6659f98f4-ccs7g" Mar 18 10:19:03.945723 master-0 kubenswrapper[30420]: I0318 10:19:03.945671 30420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-575b5dddfb-mj9qv"] Mar 18 10:19:17.588587 master-0 kubenswrapper[30420]: I0318 10:19:17.588445 30420 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-console/console-57d6b5b44-hc2hr" podUID="999213fe-0b3a-4231-80be-6cffc474d94d" containerName="console" containerID="cri-o://e6e9aa9d5f7efe6d00474f60585df43e85ee0389c5677d30c1078b03b74a708a" gracePeriod=15 Mar 18 10:19:17.964063 master-0 kubenswrapper[30420]: I0318 10:19:17.963931 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-57d6b5b44-hc2hr_999213fe-0b3a-4231-80be-6cffc474d94d/console/0.log" Mar 18 10:19:17.964063 master-0 kubenswrapper[30420]: I0318 10:19:17.964030 30420 generic.go:334] "Generic (PLEG): container finished" podID="999213fe-0b3a-4231-80be-6cffc474d94d" containerID="e6e9aa9d5f7efe6d00474f60585df43e85ee0389c5677d30c1078b03b74a708a" exitCode=2 Mar 18 10:19:17.964363 master-0 kubenswrapper[30420]: I0318 10:19:17.964075 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57d6b5b44-hc2hr" event={"ID":"999213fe-0b3a-4231-80be-6cffc474d94d","Type":"ContainerDied","Data":"e6e9aa9d5f7efe6d00474f60585df43e85ee0389c5677d30c1078b03b74a708a"} Mar 18 10:19:18.208340 master-0 kubenswrapper[30420]: I0318 10:19:18.208196 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-57d6b5b44-hc2hr_999213fe-0b3a-4231-80be-6cffc474d94d/console/0.log" Mar 18 10:19:18.208340 master-0 kubenswrapper[30420]: I0318 10:19:18.208292 30420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-57d6b5b44-hc2hr" Mar 18 10:19:18.331411 master-0 kubenswrapper[30420]: I0318 10:19:18.331337 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/999213fe-0b3a-4231-80be-6cffc474d94d-console-config\") pod \"999213fe-0b3a-4231-80be-6cffc474d94d\" (UID: \"999213fe-0b3a-4231-80be-6cffc474d94d\") " Mar 18 10:19:18.331411 master-0 kubenswrapper[30420]: I0318 10:19:18.331396 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/999213fe-0b3a-4231-80be-6cffc474d94d-oauth-serving-cert\") pod \"999213fe-0b3a-4231-80be-6cffc474d94d\" (UID: \"999213fe-0b3a-4231-80be-6cffc474d94d\") " Mar 18 10:19:18.331879 master-0 kubenswrapper[30420]: I0318 10:19:18.331447 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/999213fe-0b3a-4231-80be-6cffc474d94d-console-serving-cert\") pod \"999213fe-0b3a-4231-80be-6cffc474d94d\" (UID: \"999213fe-0b3a-4231-80be-6cffc474d94d\") " Mar 18 10:19:18.331879 master-0 kubenswrapper[30420]: I0318 10:19:18.331500 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/999213fe-0b3a-4231-80be-6cffc474d94d-console-oauth-config\") pod \"999213fe-0b3a-4231-80be-6cffc474d94d\" (UID: \"999213fe-0b3a-4231-80be-6cffc474d94d\") " Mar 18 10:19:18.331879 master-0 kubenswrapper[30420]: I0318 10:19:18.331540 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/999213fe-0b3a-4231-80be-6cffc474d94d-service-ca\") pod \"999213fe-0b3a-4231-80be-6cffc474d94d\" (UID: \"999213fe-0b3a-4231-80be-6cffc474d94d\") " Mar 18 10:19:18.331879 master-0 kubenswrapper[30420]: I0318 
10:19:18.331572 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxcfc\" (UniqueName: \"kubernetes.io/projected/999213fe-0b3a-4231-80be-6cffc474d94d-kube-api-access-pxcfc\") pod \"999213fe-0b3a-4231-80be-6cffc474d94d\" (UID: \"999213fe-0b3a-4231-80be-6cffc474d94d\") " Mar 18 10:19:18.331879 master-0 kubenswrapper[30420]: I0318 10:19:18.331606 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/999213fe-0b3a-4231-80be-6cffc474d94d-trusted-ca-bundle\") pod \"999213fe-0b3a-4231-80be-6cffc474d94d\" (UID: \"999213fe-0b3a-4231-80be-6cffc474d94d\") " Mar 18 10:19:18.332373 master-0 kubenswrapper[30420]: I0318 10:19:18.332334 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/999213fe-0b3a-4231-80be-6cffc474d94d-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "999213fe-0b3a-4231-80be-6cffc474d94d" (UID: "999213fe-0b3a-4231-80be-6cffc474d94d"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:19:18.332923 master-0 kubenswrapper[30420]: I0318 10:19:18.332871 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/999213fe-0b3a-4231-80be-6cffc474d94d-service-ca" (OuterVolumeSpecName: "service-ca") pod "999213fe-0b3a-4231-80be-6cffc474d94d" (UID: "999213fe-0b3a-4231-80be-6cffc474d94d"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:19:18.333590 master-0 kubenswrapper[30420]: I0318 10:19:18.333507 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/999213fe-0b3a-4231-80be-6cffc474d94d-console-config" (OuterVolumeSpecName: "console-config") pod "999213fe-0b3a-4231-80be-6cffc474d94d" (UID: "999213fe-0b3a-4231-80be-6cffc474d94d"). 
InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:19:18.333590 master-0 kubenswrapper[30420]: I0318 10:19:18.333555 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/999213fe-0b3a-4231-80be-6cffc474d94d-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "999213fe-0b3a-4231-80be-6cffc474d94d" (UID: "999213fe-0b3a-4231-80be-6cffc474d94d"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:19:18.335623 master-0 kubenswrapper[30420]: I0318 10:19:18.335569 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/999213fe-0b3a-4231-80be-6cffc474d94d-kube-api-access-pxcfc" (OuterVolumeSpecName: "kube-api-access-pxcfc") pod "999213fe-0b3a-4231-80be-6cffc474d94d" (UID: "999213fe-0b3a-4231-80be-6cffc474d94d"). InnerVolumeSpecName "kube-api-access-pxcfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 10:19:18.335623 master-0 kubenswrapper[30420]: I0318 10:19:18.335603 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/999213fe-0b3a-4231-80be-6cffc474d94d-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "999213fe-0b3a-4231-80be-6cffc474d94d" (UID: "999213fe-0b3a-4231-80be-6cffc474d94d"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 10:19:18.336633 master-0 kubenswrapper[30420]: I0318 10:19:18.336585 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/999213fe-0b3a-4231-80be-6cffc474d94d-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "999213fe-0b3a-4231-80be-6cffc474d94d" (UID: "999213fe-0b3a-4231-80be-6cffc474d94d"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 10:19:18.433760 master-0 kubenswrapper[30420]: I0318 10:19:18.433691 30420 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/999213fe-0b3a-4231-80be-6cffc474d94d-console-config\") on node \"master-0\" DevicePath \"\"" Mar 18 10:19:18.433760 master-0 kubenswrapper[30420]: I0318 10:19:18.433759 30420 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/999213fe-0b3a-4231-80be-6cffc474d94d-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 10:19:18.433760 master-0 kubenswrapper[30420]: I0318 10:19:18.433785 30420 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/999213fe-0b3a-4231-80be-6cffc474d94d-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 10:19:18.434257 master-0 kubenswrapper[30420]: I0318 10:19:18.433803 30420 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/999213fe-0b3a-4231-80be-6cffc474d94d-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 18 10:19:18.434257 master-0 kubenswrapper[30420]: I0318 10:19:18.433851 30420 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/999213fe-0b3a-4231-80be-6cffc474d94d-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 10:19:18.434257 master-0 kubenswrapper[30420]: I0318 10:19:18.433879 30420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxcfc\" (UniqueName: \"kubernetes.io/projected/999213fe-0b3a-4231-80be-6cffc474d94d-kube-api-access-pxcfc\") on node \"master-0\" DevicePath \"\"" Mar 18 10:19:18.434257 master-0 kubenswrapper[30420]: I0318 10:19:18.433901 30420 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/999213fe-0b3a-4231-80be-6cffc474d94d-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 10:19:18.973236 master-0 kubenswrapper[30420]: I0318 10:19:18.973178 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-57d6b5b44-hc2hr_999213fe-0b3a-4231-80be-6cffc474d94d/console/0.log" Mar 18 10:19:18.973929 master-0 kubenswrapper[30420]: I0318 10:19:18.973255 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57d6b5b44-hc2hr" event={"ID":"999213fe-0b3a-4231-80be-6cffc474d94d","Type":"ContainerDied","Data":"3748813bf850f4eeca8690362bb861aedda70485194a95ebcb39394a19df1091"} Mar 18 10:19:18.973929 master-0 kubenswrapper[30420]: I0318 10:19:18.973305 30420 scope.go:117] "RemoveContainer" containerID="e6e9aa9d5f7efe6d00474f60585df43e85ee0389c5677d30c1078b03b74a708a" Mar 18 10:19:18.973929 master-0 kubenswrapper[30420]: I0318 10:19:18.973338 30420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-57d6b5b44-hc2hr" Mar 18 10:19:19.216856 master-0 kubenswrapper[30420]: I0318 10:19:19.216727 30420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-57d6b5b44-hc2hr"] Mar 18 10:19:19.240541 master-0 kubenswrapper[30420]: I0318 10:19:19.240321 30420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-57d6b5b44-hc2hr"] Mar 18 10:19:20.182063 master-0 kubenswrapper[30420]: I0318 10:19:20.182015 30420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="999213fe-0b3a-4231-80be-6cffc474d94d" path="/var/lib/kubelet/pods/999213fe-0b3a-4231-80be-6cffc474d94d/volumes" Mar 18 10:19:29.013314 master-0 kubenswrapper[30420]: I0318 10:19:29.013161 30420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-575b5dddfb-mj9qv" podUID="cebc7ed6-93ef-46cc-8f8f-246c479bd68a" containerName="console" containerID="cri-o://dafda178b37f81dbd6350febafe433b0ced5b2fa98b692355e8c421bde419c74" gracePeriod=15 Mar 18 10:19:29.601321 master-0 kubenswrapper[30420]: I0318 10:19:29.601271 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-575b5dddfb-mj9qv_cebc7ed6-93ef-46cc-8f8f-246c479bd68a/console/0.log" Mar 18 10:19:29.602090 master-0 kubenswrapper[30420]: I0318 10:19:29.602061 30420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-575b5dddfb-mj9qv" Mar 18 10:19:29.629406 master-0 kubenswrapper[30420]: I0318 10:19:29.629267 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-console-serving-cert\") pod \"cebc7ed6-93ef-46cc-8f8f-246c479bd68a\" (UID: \"cebc7ed6-93ef-46cc-8f8f-246c479bd68a\") " Mar 18 10:19:29.629406 master-0 kubenswrapper[30420]: I0318 10:19:29.629364 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-oauth-serving-cert\") pod \"cebc7ed6-93ef-46cc-8f8f-246c479bd68a\" (UID: \"cebc7ed6-93ef-46cc-8f8f-246c479bd68a\") " Mar 18 10:19:29.629658 master-0 kubenswrapper[30420]: I0318 10:19:29.629407 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-service-ca\") pod \"cebc7ed6-93ef-46cc-8f8f-246c479bd68a\" (UID: \"cebc7ed6-93ef-46cc-8f8f-246c479bd68a\") " Mar 18 10:19:29.629658 master-0 kubenswrapper[30420]: I0318 10:19:29.629496 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfqhj\" (UniqueName: \"kubernetes.io/projected/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-kube-api-access-kfqhj\") pod \"cebc7ed6-93ef-46cc-8f8f-246c479bd68a\" (UID: \"cebc7ed6-93ef-46cc-8f8f-246c479bd68a\") " Mar 18 10:19:29.629658 master-0 kubenswrapper[30420]: I0318 10:19:29.629528 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-trusted-ca-bundle\") pod \"cebc7ed6-93ef-46cc-8f8f-246c479bd68a\" (UID: \"cebc7ed6-93ef-46cc-8f8f-246c479bd68a\") " Mar 18 10:19:29.629658 master-0 kubenswrapper[30420]: 
I0318 10:19:29.629649 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-console-config\") pod \"cebc7ed6-93ef-46cc-8f8f-246c479bd68a\" (UID: \"cebc7ed6-93ef-46cc-8f8f-246c479bd68a\") " Mar 18 10:19:29.629861 master-0 kubenswrapper[30420]: I0318 10:19:29.629680 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-console-oauth-config\") pod \"cebc7ed6-93ef-46cc-8f8f-246c479bd68a\" (UID: \"cebc7ed6-93ef-46cc-8f8f-246c479bd68a\") " Mar 18 10:19:29.629898 master-0 kubenswrapper[30420]: I0318 10:19:29.629869 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "cebc7ed6-93ef-46cc-8f8f-246c479bd68a" (UID: "cebc7ed6-93ef-46cc-8f8f-246c479bd68a"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:19:29.630580 master-0 kubenswrapper[30420]: I0318 10:19:29.630543 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-console-config" (OuterVolumeSpecName: "console-config") pod "cebc7ed6-93ef-46cc-8f8f-246c479bd68a" (UID: "cebc7ed6-93ef-46cc-8f8f-246c479bd68a"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:19:29.631053 master-0 kubenswrapper[30420]: I0318 10:19:29.630996 30420 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-console-config\") on node \"master-0\" DevicePath \"\"" Mar 18 10:19:29.631053 master-0 kubenswrapper[30420]: I0318 10:19:29.631049 30420 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 10:19:29.631170 master-0 kubenswrapper[30420]: I0318 10:19:29.631100 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "cebc7ed6-93ef-46cc-8f8f-246c479bd68a" (UID: "cebc7ed6-93ef-46cc-8f8f-246c479bd68a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:19:29.631997 master-0 kubenswrapper[30420]: I0318 10:19:29.631943 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-service-ca" (OuterVolumeSpecName: "service-ca") pod "cebc7ed6-93ef-46cc-8f8f-246c479bd68a" (UID: "cebc7ed6-93ef-46cc-8f8f-246c479bd68a"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 10:19:29.633483 master-0 kubenswrapper[30420]: I0318 10:19:29.633413 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "cebc7ed6-93ef-46cc-8f8f-246c479bd68a" (UID: "cebc7ed6-93ef-46cc-8f8f-246c479bd68a"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 10:19:29.633686 master-0 kubenswrapper[30420]: I0318 10:19:29.633660 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-kube-api-access-kfqhj" (OuterVolumeSpecName: "kube-api-access-kfqhj") pod "cebc7ed6-93ef-46cc-8f8f-246c479bd68a" (UID: "cebc7ed6-93ef-46cc-8f8f-246c479bd68a"). InnerVolumeSpecName "kube-api-access-kfqhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 10:19:29.633917 master-0 kubenswrapper[30420]: I0318 10:19:29.633862 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "cebc7ed6-93ef-46cc-8f8f-246c479bd68a" (UID: "cebc7ed6-93ef-46cc-8f8f-246c479bd68a"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 10:19:29.737924 master-0 kubenswrapper[30420]: I0318 10:19:29.732015 30420 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 10:19:29.737924 master-0 kubenswrapper[30420]: I0318 10:19:29.732069 30420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfqhj\" (UniqueName: \"kubernetes.io/projected/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-kube-api-access-kfqhj\") on node \"master-0\" DevicePath \"\"" Mar 18 10:19:29.737924 master-0 kubenswrapper[30420]: I0318 10:19:29.732088 30420 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 10:19:29.737924 master-0 kubenswrapper[30420]: I0318 10:19:29.732100 30420 reconciler_common.go:293] "Volume detached for 
volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 18 10:19:29.737924 master-0 kubenswrapper[30420]: I0318 10:19:29.732112 30420 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cebc7ed6-93ef-46cc-8f8f-246c479bd68a-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 10:19:30.087206 master-0 kubenswrapper[30420]: I0318 10:19:30.087111 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-575b5dddfb-mj9qv_cebc7ed6-93ef-46cc-8f8f-246c479bd68a/console/0.log" Mar 18 10:19:30.087206 master-0 kubenswrapper[30420]: I0318 10:19:30.087200 30420 generic.go:334] "Generic (PLEG): container finished" podID="cebc7ed6-93ef-46cc-8f8f-246c479bd68a" containerID="dafda178b37f81dbd6350febafe433b0ced5b2fa98b692355e8c421bde419c74" exitCode=2 Mar 18 10:19:30.088163 master-0 kubenswrapper[30420]: I0318 10:19:30.087245 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-575b5dddfb-mj9qv" event={"ID":"cebc7ed6-93ef-46cc-8f8f-246c479bd68a","Type":"ContainerDied","Data":"dafda178b37f81dbd6350febafe433b0ced5b2fa98b692355e8c421bde419c74"} Mar 18 10:19:30.088163 master-0 kubenswrapper[30420]: I0318 10:19:30.087301 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-575b5dddfb-mj9qv" event={"ID":"cebc7ed6-93ef-46cc-8f8f-246c479bd68a","Type":"ContainerDied","Data":"26bab77a906fcdde1e299ed503a7b7dbb0a30a002fc00ccf94c5c503777d4cee"} Mar 18 10:19:30.088163 master-0 kubenswrapper[30420]: I0318 10:19:30.087337 30420 scope.go:117] "RemoveContainer" containerID="dafda178b37f81dbd6350febafe433b0ced5b2fa98b692355e8c421bde419c74" Mar 18 10:19:30.088163 master-0 kubenswrapper[30420]: I0318 10:19:30.087369 30420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-575b5dddfb-mj9qv" Mar 18 10:19:30.113226 master-0 kubenswrapper[30420]: I0318 10:19:30.113168 30420 scope.go:117] "RemoveContainer" containerID="dafda178b37f81dbd6350febafe433b0ced5b2fa98b692355e8c421bde419c74" Mar 18 10:19:30.113730 master-0 kubenswrapper[30420]: E0318 10:19:30.113666 30420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dafda178b37f81dbd6350febafe433b0ced5b2fa98b692355e8c421bde419c74\": container with ID starting with dafda178b37f81dbd6350febafe433b0ced5b2fa98b692355e8c421bde419c74 not found: ID does not exist" containerID="dafda178b37f81dbd6350febafe433b0ced5b2fa98b692355e8c421bde419c74" Mar 18 10:19:30.113808 master-0 kubenswrapper[30420]: I0318 10:19:30.113744 30420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dafda178b37f81dbd6350febafe433b0ced5b2fa98b692355e8c421bde419c74"} err="failed to get container status \"dafda178b37f81dbd6350febafe433b0ced5b2fa98b692355e8c421bde419c74\": rpc error: code = NotFound desc = could not find container \"dafda178b37f81dbd6350febafe433b0ced5b2fa98b692355e8c421bde419c74\": container with ID starting with dafda178b37f81dbd6350febafe433b0ced5b2fa98b692355e8c421bde419c74 not found: ID does not exist" Mar 18 10:19:30.236071 master-0 kubenswrapper[30420]: I0318 10:19:30.235996 30420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-575b5dddfb-mj9qv"] Mar 18 10:19:30.251271 master-0 kubenswrapper[30420]: I0318 10:19:30.251198 30420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-575b5dddfb-mj9qv"] Mar 18 10:19:32.187745 master-0 kubenswrapper[30420]: I0318 10:19:32.187645 30420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cebc7ed6-93ef-46cc-8f8f-246c479bd68a" path="/var/lib/kubelet/pods/cebc7ed6-93ef-46cc-8f8f-246c479bd68a/volumes" Mar 18 
10:19:54.317889 master-0 kubenswrapper[30420]: I0318 10:19:54.317686 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 10:19:54.355226 master-0 kubenswrapper[30420]: I0318 10:19:54.355149 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 10:19:55.363067 master-0 kubenswrapper[30420]: I0318 10:19:55.363019 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 10:20:04.126294 master-0 kubenswrapper[30420]: I0318 10:20:04.126223 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4pmc6l"]
Mar 18 10:20:04.127057 master-0 kubenswrapper[30420]: E0318 10:20:04.126573 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cebc7ed6-93ef-46cc-8f8f-246c479bd68a" containerName="console"
Mar 18 10:20:04.127057 master-0 kubenswrapper[30420]: I0318 10:20:04.126590 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="cebc7ed6-93ef-46cc-8f8f-246c479bd68a" containerName="console"
Mar 18 10:20:04.127057 master-0 kubenswrapper[30420]: E0318 10:20:04.126635 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="999213fe-0b3a-4231-80be-6cffc474d94d" containerName="console"
Mar 18 10:20:04.127057 master-0 kubenswrapper[30420]: I0318 10:20:04.126641 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="999213fe-0b3a-4231-80be-6cffc474d94d" containerName="console"
Mar 18 10:20:04.127057 master-0 kubenswrapper[30420]: I0318 10:20:04.126796 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="cebc7ed6-93ef-46cc-8f8f-246c479bd68a" containerName="console"
Mar 18 10:20:04.127057 master-0 kubenswrapper[30420]: I0318 10:20:04.126894 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="999213fe-0b3a-4231-80be-6cffc474d94d" containerName="console"
Mar 18 10:20:04.127992 master-0 kubenswrapper[30420]: I0318 10:20:04.127913 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4pmc6l"
Mar 18 10:20:04.131848 master-0 kubenswrapper[30420]: I0318 10:20:04.130347 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-rcsgj"
Mar 18 10:20:04.136853 master-0 kubenswrapper[30420]: I0318 10:20:04.134923 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4pmc6l"]
Mar 18 10:20:04.183649 master-0 kubenswrapper[30420]: I0318 10:20:04.183582 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/966006be-32a6-4151-8655-ca0ced34c69a-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4pmc6l\" (UID: \"966006be-32a6-4151-8655-ca0ced34c69a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4pmc6l"
Mar 18 10:20:04.184098 master-0 kubenswrapper[30420]: I0318 10:20:04.184066 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/966006be-32a6-4151-8655-ca0ced34c69a-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4pmc6l\" (UID: \"966006be-32a6-4151-8655-ca0ced34c69a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4pmc6l"
Mar 18 10:20:04.184152 master-0 kubenswrapper[30420]: I0318 10:20:04.184121 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9lqj\" (UniqueName: \"kubernetes.io/projected/966006be-32a6-4151-8655-ca0ced34c69a-kube-api-access-x9lqj\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4pmc6l\" (UID: \"966006be-32a6-4151-8655-ca0ced34c69a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4pmc6l"
Mar 18 10:20:04.285368 master-0 kubenswrapper[30420]: I0318 10:20:04.285268 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/966006be-32a6-4151-8655-ca0ced34c69a-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4pmc6l\" (UID: \"966006be-32a6-4151-8655-ca0ced34c69a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4pmc6l"
Mar 18 10:20:04.285368 master-0 kubenswrapper[30420]: I0318 10:20:04.285335 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9lqj\" (UniqueName: \"kubernetes.io/projected/966006be-32a6-4151-8655-ca0ced34c69a-kube-api-access-x9lqj\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4pmc6l\" (UID: \"966006be-32a6-4151-8655-ca0ced34c69a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4pmc6l"
Mar 18 10:20:04.285884 master-0 kubenswrapper[30420]: I0318 10:20:04.285449 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/966006be-32a6-4151-8655-ca0ced34c69a-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4pmc6l\" (UID: \"966006be-32a6-4151-8655-ca0ced34c69a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4pmc6l"
Mar 18 10:20:04.286115 master-0 kubenswrapper[30420]: I0318 10:20:04.286063 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/966006be-32a6-4151-8655-ca0ced34c69a-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4pmc6l\" (UID: \"966006be-32a6-4151-8655-ca0ced34c69a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4pmc6l"
Mar 18 10:20:04.286562 master-0 kubenswrapper[30420]: I0318 10:20:04.286481 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/966006be-32a6-4151-8655-ca0ced34c69a-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4pmc6l\" (UID: \"966006be-32a6-4151-8655-ca0ced34c69a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4pmc6l"
Mar 18 10:20:04.311919 master-0 kubenswrapper[30420]: I0318 10:20:04.311856 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9lqj\" (UniqueName: \"kubernetes.io/projected/966006be-32a6-4151-8655-ca0ced34c69a-kube-api-access-x9lqj\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4pmc6l\" (UID: \"966006be-32a6-4151-8655-ca0ced34c69a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4pmc6l"
Mar 18 10:20:04.463502 master-0 kubenswrapper[30420]: I0318 10:20:04.463310 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4pmc6l"
Mar 18 10:20:04.962188 master-0 kubenswrapper[30420]: I0318 10:20:04.962109 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4pmc6l"]
Mar 18 10:20:04.971262 master-0 kubenswrapper[30420]: W0318 10:20:04.971203 30420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod966006be_32a6_4151_8655_ca0ced34c69a.slice/crio-6eb8bb49852777dc924df6ce1f64909b404b9c8691e06c5e181f3f3ce972265b WatchSource:0}: Error finding container 6eb8bb49852777dc924df6ce1f64909b404b9c8691e06c5e181f3f3ce972265b: Status 404 returned error can't find the container with id 6eb8bb49852777dc924df6ce1f64909b404b9c8691e06c5e181f3f3ce972265b
Mar 18 10:20:05.434179 master-0 kubenswrapper[30420]: I0318 10:20:05.434087 30420 generic.go:334] "Generic (PLEG): container finished" podID="966006be-32a6-4151-8655-ca0ced34c69a" containerID="4cfae8b4c54a403ea92b44cdc30cdf5b01713edd8c29e0223f7cd7bef122e4c1" exitCode=0
Mar 18 10:20:05.434759 master-0 kubenswrapper[30420]: I0318 10:20:05.434189 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4pmc6l" event={"ID":"966006be-32a6-4151-8655-ca0ced34c69a","Type":"ContainerDied","Data":"4cfae8b4c54a403ea92b44cdc30cdf5b01713edd8c29e0223f7cd7bef122e4c1"}
Mar 18 10:20:05.434759 master-0 kubenswrapper[30420]: I0318 10:20:05.434253 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4pmc6l" event={"ID":"966006be-32a6-4151-8655-ca0ced34c69a","Type":"ContainerStarted","Data":"6eb8bb49852777dc924df6ce1f64909b404b9c8691e06c5e181f3f3ce972265b"}
Mar 18 10:20:05.435922 master-0 kubenswrapper[30420]: I0318 10:20:05.435891 30420 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 18 10:20:07.464588 master-0 kubenswrapper[30420]: I0318 10:20:07.464497 30420 generic.go:334] "Generic (PLEG): container finished" podID="966006be-32a6-4151-8655-ca0ced34c69a" containerID="edd4e494921fb9ea0d7ded9147e075550fda1fd84a881500bb363f5235e0f040" exitCode=0
Mar 18 10:20:07.465482 master-0 kubenswrapper[30420]: I0318 10:20:07.464600 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4pmc6l" event={"ID":"966006be-32a6-4151-8655-ca0ced34c69a","Type":"ContainerDied","Data":"edd4e494921fb9ea0d7ded9147e075550fda1fd84a881500bb363f5235e0f040"}
Mar 18 10:20:08.479723 master-0 kubenswrapper[30420]: I0318 10:20:08.479625 30420 generic.go:334] "Generic (PLEG): container finished" podID="966006be-32a6-4151-8655-ca0ced34c69a" containerID="75078c5e9b7da6cfa5a161b2ae5271efa412a97253df9eee890eb14b586a04cc" exitCode=0
Mar 18 10:20:08.480711 master-0 kubenswrapper[30420]: I0318 10:20:08.479728 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4pmc6l" event={"ID":"966006be-32a6-4151-8655-ca0ced34c69a","Type":"ContainerDied","Data":"75078c5e9b7da6cfa5a161b2ae5271efa412a97253df9eee890eb14b586a04cc"}
Mar 18 10:20:09.773610 master-0 kubenswrapper[30420]: I0318 10:20:09.773559 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4pmc6l"
Mar 18 10:20:09.786519 master-0 kubenswrapper[30420]: I0318 10:20:09.786451 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/966006be-32a6-4151-8655-ca0ced34c69a-util\") pod \"966006be-32a6-4151-8655-ca0ced34c69a\" (UID: \"966006be-32a6-4151-8655-ca0ced34c69a\") "
Mar 18 10:20:09.786809 master-0 kubenswrapper[30420]: I0318 10:20:09.786656 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/966006be-32a6-4151-8655-ca0ced34c69a-bundle\") pod \"966006be-32a6-4151-8655-ca0ced34c69a\" (UID: \"966006be-32a6-4151-8655-ca0ced34c69a\") "
Mar 18 10:20:09.787008 master-0 kubenswrapper[30420]: I0318 10:20:09.786954 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9lqj\" (UniqueName: \"kubernetes.io/projected/966006be-32a6-4151-8655-ca0ced34c69a-kube-api-access-x9lqj\") pod \"966006be-32a6-4151-8655-ca0ced34c69a\" (UID: \"966006be-32a6-4151-8655-ca0ced34c69a\") "
Mar 18 10:20:09.788386 master-0 kubenswrapper[30420]: I0318 10:20:09.788325 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/966006be-32a6-4151-8655-ca0ced34c69a-bundle" (OuterVolumeSpecName: "bundle") pod "966006be-32a6-4151-8655-ca0ced34c69a" (UID: "966006be-32a6-4151-8655-ca0ced34c69a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 10:20:09.793718 master-0 kubenswrapper[30420]: I0318 10:20:09.793644 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/966006be-32a6-4151-8655-ca0ced34c69a-kube-api-access-x9lqj" (OuterVolumeSpecName: "kube-api-access-x9lqj") pod "966006be-32a6-4151-8655-ca0ced34c69a" (UID: "966006be-32a6-4151-8655-ca0ced34c69a"). InnerVolumeSpecName "kube-api-access-x9lqj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 10:20:09.809679 master-0 kubenswrapper[30420]: I0318 10:20:09.809618 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/966006be-32a6-4151-8655-ca0ced34c69a-util" (OuterVolumeSpecName: "util") pod "966006be-32a6-4151-8655-ca0ced34c69a" (UID: "966006be-32a6-4151-8655-ca0ced34c69a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 10:20:09.889775 master-0 kubenswrapper[30420]: I0318 10:20:09.889712 30420 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/966006be-32a6-4151-8655-ca0ced34c69a-util\") on node \"master-0\" DevicePath \"\""
Mar 18 10:20:09.889775 master-0 kubenswrapper[30420]: I0318 10:20:09.889763 30420 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/966006be-32a6-4151-8655-ca0ced34c69a-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 10:20:09.889775 master-0 kubenswrapper[30420]: I0318 10:20:09.889785 30420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9lqj\" (UniqueName: \"kubernetes.io/projected/966006be-32a6-4151-8655-ca0ced34c69a-kube-api-access-x9lqj\") on node \"master-0\" DevicePath \"\""
Mar 18 10:20:10.498654 master-0 kubenswrapper[30420]: I0318 10:20:10.498465 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4pmc6l" event={"ID":"966006be-32a6-4151-8655-ca0ced34c69a","Type":"ContainerDied","Data":"6eb8bb49852777dc924df6ce1f64909b404b9c8691e06c5e181f3f3ce972265b"}
Mar 18 10:20:10.498654 master-0 kubenswrapper[30420]: I0318 10:20:10.498522 30420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6eb8bb49852777dc924df6ce1f64909b404b9c8691e06c5e181f3f3ce972265b"
Mar 18 10:20:10.498654 master-0 kubenswrapper[30420]: I0318 10:20:10.498646 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4pmc6l"
Mar 18 10:20:16.971795 master-0 kubenswrapper[30420]: I0318 10:20:16.971730 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/lvms-operator-56f66bc554-5vdd5"]
Mar 18 10:20:16.972416 master-0 kubenswrapper[30420]: E0318 10:20:16.972012 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="966006be-32a6-4151-8655-ca0ced34c69a" containerName="util"
Mar 18 10:20:16.972416 master-0 kubenswrapper[30420]: I0318 10:20:16.972024 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="966006be-32a6-4151-8655-ca0ced34c69a" containerName="util"
Mar 18 10:20:16.972416 master-0 kubenswrapper[30420]: E0318 10:20:16.972050 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="966006be-32a6-4151-8655-ca0ced34c69a" containerName="pull"
Mar 18 10:20:16.972416 master-0 kubenswrapper[30420]: I0318 10:20:16.972056 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="966006be-32a6-4151-8655-ca0ced34c69a" containerName="pull"
Mar 18 10:20:16.972416 master-0 kubenswrapper[30420]: E0318 10:20:16.972076 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="966006be-32a6-4151-8655-ca0ced34c69a" containerName="extract"
Mar 18 10:20:16.972416 master-0 kubenswrapper[30420]: I0318 10:20:16.972082 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="966006be-32a6-4151-8655-ca0ced34c69a" containerName="extract"
Mar 18 10:20:16.972416 master-0 kubenswrapper[30420]: I0318 10:20:16.972205 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="966006be-32a6-4151-8655-ca0ced34c69a" containerName="extract"
Mar 18 10:20:16.972685 master-0 kubenswrapper[30420]: I0318 10:20:16.972662 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-56f66bc554-5vdd5"
Mar 18 10:20:16.985979 master-0 kubenswrapper[30420]: I0318 10:20:16.983131 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-webhook-server-cert"
Mar 18 10:20:16.985979 master-0 kubenswrapper[30420]: I0318 10:20:16.983397 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"kube-root-ca.crt"
Mar 18 10:20:16.985979 master-0 kubenswrapper[30420]: I0318 10:20:16.983591 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"openshift-service-ca.crt"
Mar 18 10:20:16.985979 master-0 kubenswrapper[30420]: I0318 10:20:16.984529 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-service-cert"
Mar 18 10:20:16.985979 master-0 kubenswrapper[30420]: I0318 10:20:16.984715 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-metrics-cert"
Mar 18 10:20:17.021438 master-0 kubenswrapper[30420]: I0318 10:20:17.021369 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-56f66bc554-5vdd5"]
Mar 18 10:20:17.026680 master-0 kubenswrapper[30420]: I0318 10:20:17.026627 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5bca57b4-b8b9-4298-9f45-1ad27ad0e85f-webhook-cert\") pod \"lvms-operator-56f66bc554-5vdd5\" (UID: \"5bca57b4-b8b9-4298-9f45-1ad27ad0e85f\") " pod="openshift-storage/lvms-operator-56f66bc554-5vdd5"
Mar 18 10:20:17.026892 master-0 kubenswrapper[30420]: I0318 10:20:17.026695 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/5bca57b4-b8b9-4298-9f45-1ad27ad0e85f-metrics-cert\") pod \"lvms-operator-56f66bc554-5vdd5\" (UID: \"5bca57b4-b8b9-4298-9f45-1ad27ad0e85f\") " pod="openshift-storage/lvms-operator-56f66bc554-5vdd5"
Mar 18 10:20:17.026892 master-0 kubenswrapper[30420]: I0318 10:20:17.026720 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5bca57b4-b8b9-4298-9f45-1ad27ad0e85f-apiservice-cert\") pod \"lvms-operator-56f66bc554-5vdd5\" (UID: \"5bca57b4-b8b9-4298-9f45-1ad27ad0e85f\") " pod="openshift-storage/lvms-operator-56f66bc554-5vdd5"
Mar 18 10:20:17.026892 master-0 kubenswrapper[30420]: I0318 10:20:17.026754 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/5bca57b4-b8b9-4298-9f45-1ad27ad0e85f-socket-dir\") pod \"lvms-operator-56f66bc554-5vdd5\" (UID: \"5bca57b4-b8b9-4298-9f45-1ad27ad0e85f\") " pod="openshift-storage/lvms-operator-56f66bc554-5vdd5"
Mar 18 10:20:17.026892 master-0 kubenswrapper[30420]: I0318 10:20:17.026781 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twgtv\" (UniqueName: \"kubernetes.io/projected/5bca57b4-b8b9-4298-9f45-1ad27ad0e85f-kube-api-access-twgtv\") pod \"lvms-operator-56f66bc554-5vdd5\" (UID: \"5bca57b4-b8b9-4298-9f45-1ad27ad0e85f\") " pod="openshift-storage/lvms-operator-56f66bc554-5vdd5"
Mar 18 10:20:17.128088 master-0 kubenswrapper[30420]: I0318 10:20:17.128040 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5bca57b4-b8b9-4298-9f45-1ad27ad0e85f-webhook-cert\") pod \"lvms-operator-56f66bc554-5vdd5\" (UID: \"5bca57b4-b8b9-4298-9f45-1ad27ad0e85f\") " pod="openshift-storage/lvms-operator-56f66bc554-5vdd5"
Mar 18 10:20:17.128430 master-0 kubenswrapper[30420]: I0318 10:20:17.128410 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/5bca57b4-b8b9-4298-9f45-1ad27ad0e85f-metrics-cert\") pod \"lvms-operator-56f66bc554-5vdd5\" (UID: \"5bca57b4-b8b9-4298-9f45-1ad27ad0e85f\") " pod="openshift-storage/lvms-operator-56f66bc554-5vdd5"
Mar 18 10:20:17.128600 master-0 kubenswrapper[30420]: I0318 10:20:17.128581 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5bca57b4-b8b9-4298-9f45-1ad27ad0e85f-apiservice-cert\") pod \"lvms-operator-56f66bc554-5vdd5\" (UID: \"5bca57b4-b8b9-4298-9f45-1ad27ad0e85f\") " pod="openshift-storage/lvms-operator-56f66bc554-5vdd5"
Mar 18 10:20:17.128754 master-0 kubenswrapper[30420]: I0318 10:20:17.128734 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/5bca57b4-b8b9-4298-9f45-1ad27ad0e85f-socket-dir\") pod \"lvms-operator-56f66bc554-5vdd5\" (UID: \"5bca57b4-b8b9-4298-9f45-1ad27ad0e85f\") " pod="openshift-storage/lvms-operator-56f66bc554-5vdd5"
Mar 18 10:20:17.128929 master-0 kubenswrapper[30420]: I0318 10:20:17.128907 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twgtv\" (UniqueName: \"kubernetes.io/projected/5bca57b4-b8b9-4298-9f45-1ad27ad0e85f-kube-api-access-twgtv\") pod \"lvms-operator-56f66bc554-5vdd5\" (UID: \"5bca57b4-b8b9-4298-9f45-1ad27ad0e85f\") " pod="openshift-storage/lvms-operator-56f66bc554-5vdd5"
Mar 18 10:20:17.130565 master-0 kubenswrapper[30420]: I0318 10:20:17.129754 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/5bca57b4-b8b9-4298-9f45-1ad27ad0e85f-socket-dir\") pod \"lvms-operator-56f66bc554-5vdd5\" (UID: \"5bca57b4-b8b9-4298-9f45-1ad27ad0e85f\") " pod="openshift-storage/lvms-operator-56f66bc554-5vdd5"
Mar 18 10:20:17.134777 master-0 kubenswrapper[30420]: I0318 10:20:17.134715 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/5bca57b4-b8b9-4298-9f45-1ad27ad0e85f-metrics-cert\") pod \"lvms-operator-56f66bc554-5vdd5\" (UID: \"5bca57b4-b8b9-4298-9f45-1ad27ad0e85f\") " pod="openshift-storage/lvms-operator-56f66bc554-5vdd5"
Mar 18 10:20:17.135392 master-0 kubenswrapper[30420]: I0318 10:20:17.135367 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5bca57b4-b8b9-4298-9f45-1ad27ad0e85f-apiservice-cert\") pod \"lvms-operator-56f66bc554-5vdd5\" (UID: \"5bca57b4-b8b9-4298-9f45-1ad27ad0e85f\") " pod="openshift-storage/lvms-operator-56f66bc554-5vdd5"
Mar 18 10:20:17.139882 master-0 kubenswrapper[30420]: I0318 10:20:17.139817 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5bca57b4-b8b9-4298-9f45-1ad27ad0e85f-webhook-cert\") pod \"lvms-operator-56f66bc554-5vdd5\" (UID: \"5bca57b4-b8b9-4298-9f45-1ad27ad0e85f\") " pod="openshift-storage/lvms-operator-56f66bc554-5vdd5"
Mar 18 10:20:17.149901 master-0 kubenswrapper[30420]: I0318 10:20:17.149720 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twgtv\" (UniqueName: \"kubernetes.io/projected/5bca57b4-b8b9-4298-9f45-1ad27ad0e85f-kube-api-access-twgtv\") pod \"lvms-operator-56f66bc554-5vdd5\" (UID: \"5bca57b4-b8b9-4298-9f45-1ad27ad0e85f\") " pod="openshift-storage/lvms-operator-56f66bc554-5vdd5"
Mar 18 10:20:17.299278 master-0 kubenswrapper[30420]: I0318 10:20:17.299208 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-56f66bc554-5vdd5"
Mar 18 10:20:17.755307 master-0 kubenswrapper[30420]: W0318 10:20:17.754254 30420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bca57b4_b8b9_4298_9f45_1ad27ad0e85f.slice/crio-5991d5bb4284c6f6ec105d5d715f7724365ceb2685a0ed4f71345f435fffcccb WatchSource:0}: Error finding container 5991d5bb4284c6f6ec105d5d715f7724365ceb2685a0ed4f71345f435fffcccb: Status 404 returned error can't find the container with id 5991d5bb4284c6f6ec105d5d715f7724365ceb2685a0ed4f71345f435fffcccb
Mar 18 10:20:17.756074 master-0 kubenswrapper[30420]: I0318 10:20:17.756000 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-56f66bc554-5vdd5"]
Mar 18 10:20:18.564509 master-0 kubenswrapper[30420]: I0318 10:20:18.564404 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-56f66bc554-5vdd5" event={"ID":"5bca57b4-b8b9-4298-9f45-1ad27ad0e85f","Type":"ContainerStarted","Data":"5991d5bb4284c6f6ec105d5d715f7724365ceb2685a0ed4f71345f435fffcccb"}
Mar 18 10:20:23.603322 master-0 kubenswrapper[30420]: I0318 10:20:23.603235 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-56f66bc554-5vdd5" event={"ID":"5bca57b4-b8b9-4298-9f45-1ad27ad0e85f","Type":"ContainerStarted","Data":"bcf8d3d718399b80beefa208e153025670b16f6c96eeb42d6df58231c7def510"}
Mar 18 10:20:23.604269 master-0 kubenswrapper[30420]: I0318 10:20:23.603499 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/lvms-operator-56f66bc554-5vdd5"
Mar 18 10:20:23.606821 master-0 kubenswrapper[30420]: I0318 10:20:23.606773 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/lvms-operator-56f66bc554-5vdd5"
Mar 18 10:20:23.636319 master-0 kubenswrapper[30420]: I0318 10:20:23.636210 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/lvms-operator-56f66bc554-5vdd5" podStartSLOduration=2.748836175 podStartE2EDuration="7.63618882s" podCreationTimestamp="2026-03-18 10:20:16 +0000 UTC" firstStartedPulling="2026-03-18 10:20:17.757516714 +0000 UTC m=+581.810262633" lastFinishedPulling="2026-03-18 10:20:22.644869349 +0000 UTC m=+586.697615278" observedRunningTime="2026-03-18 10:20:23.629306257 +0000 UTC m=+587.682052176" watchObservedRunningTime="2026-03-18 10:20:23.63618882 +0000 UTC m=+587.688934749"
Mar 18 10:20:27.386566 master-0 kubenswrapper[30420]: I0318 10:20:27.386096 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54w8cp"]
Mar 18 10:20:27.387964 master-0 kubenswrapper[30420]: I0318 10:20:27.387778 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54w8cp"
Mar 18 10:20:27.389979 master-0 kubenswrapper[30420]: I0318 10:20:27.389787 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-rcsgj"
Mar 18 10:20:27.409215 master-0 kubenswrapper[30420]: I0318 10:20:27.409169 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crkt9\" (UniqueName: \"kubernetes.io/projected/8d8cbe20-08bf-417a-9a3e-faa63cde3989-kube-api-access-crkt9\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54w8cp\" (UID: \"8d8cbe20-08bf-417a-9a3e-faa63cde3989\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54w8cp"
Mar 18 10:20:27.413948 master-0 kubenswrapper[30420]: I0318 10:20:27.409232 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8d8cbe20-08bf-417a-9a3e-faa63cde3989-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54w8cp\" (UID: \"8d8cbe20-08bf-417a-9a3e-faa63cde3989\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54w8cp"
Mar 18 10:20:27.413948 master-0 kubenswrapper[30420]: I0318 10:20:27.409280 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8d8cbe20-08bf-417a-9a3e-faa63cde3989-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54w8cp\" (UID: \"8d8cbe20-08bf-417a-9a3e-faa63cde3989\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54w8cp"
Mar 18 10:20:27.414155 master-0 kubenswrapper[30420]: I0318 10:20:27.414077 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54w8cp"]
Mar 18 10:20:27.510919 master-0 kubenswrapper[30420]: I0318 10:20:27.510847 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8d8cbe20-08bf-417a-9a3e-faa63cde3989-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54w8cp\" (UID: \"8d8cbe20-08bf-417a-9a3e-faa63cde3989\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54w8cp"
Mar 18 10:20:27.511144 master-0 kubenswrapper[30420]: I0318 10:20:27.510993 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8d8cbe20-08bf-417a-9a3e-faa63cde3989-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54w8cp\" (UID: \"8d8cbe20-08bf-417a-9a3e-faa63cde3989\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54w8cp"
Mar 18 10:20:27.511144 master-0 kubenswrapper[30420]: I0318 10:20:27.511132 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crkt9\" (UniqueName: \"kubernetes.io/projected/8d8cbe20-08bf-417a-9a3e-faa63cde3989-kube-api-access-crkt9\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54w8cp\" (UID: \"8d8cbe20-08bf-417a-9a3e-faa63cde3989\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54w8cp"
Mar 18 10:20:27.511717 master-0 kubenswrapper[30420]: I0318 10:20:27.511676 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8d8cbe20-08bf-417a-9a3e-faa63cde3989-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54w8cp\" (UID: \"8d8cbe20-08bf-417a-9a3e-faa63cde3989\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54w8cp"
Mar 18 10:20:27.514845 master-0 kubenswrapper[30420]: I0318 10:20:27.512014 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8d8cbe20-08bf-417a-9a3e-faa63cde3989-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54w8cp\" (UID: \"8d8cbe20-08bf-417a-9a3e-faa63cde3989\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54w8cp"
Mar 18 10:20:27.527242 master-0 kubenswrapper[30420]: I0318 10:20:27.527189 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crkt9\" (UniqueName: \"kubernetes.io/projected/8d8cbe20-08bf-417a-9a3e-faa63cde3989-kube-api-access-crkt9\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54w8cp\" (UID: \"8d8cbe20-08bf-417a-9a3e-faa63cde3989\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54w8cp"
Mar 18 10:20:27.707464 master-0 kubenswrapper[30420]: I0318 10:20:27.707336 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54w8cp"
Mar 18 10:20:28.147275 master-0 kubenswrapper[30420]: I0318 10:20:28.147171 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54w8cp"]
Mar 18 10:20:28.646919 master-0 kubenswrapper[30420]: I0318 10:20:28.646837 30420 generic.go:334] "Generic (PLEG): container finished" podID="8d8cbe20-08bf-417a-9a3e-faa63cde3989" containerID="c883cfecb493fb6a8ed9a09f7f7f9311c5ff4d60e3e3c17e8b7a989ec1719bbb" exitCode=0
Mar 18 10:20:28.647514 master-0 kubenswrapper[30420]: I0318 10:20:28.646922 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54w8cp" event={"ID":"8d8cbe20-08bf-417a-9a3e-faa63cde3989","Type":"ContainerDied","Data":"c883cfecb493fb6a8ed9a09f7f7f9311c5ff4d60e3e3c17e8b7a989ec1719bbb"}
Mar 18 10:20:28.647514 master-0 kubenswrapper[30420]: I0318 10:20:28.646969 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54w8cp" event={"ID":"8d8cbe20-08bf-417a-9a3e-faa63cde3989","Type":"ContainerStarted","Data":"529c1b4f4c08a25a5308bd7e5cd638ef68a74ce756002f0d1b77bb3132f24fd7"}
Mar 18 10:20:29.575920 master-0 kubenswrapper[30420]: I0318 10:20:29.575847 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c17slml"]
Mar 18 10:20:29.577722 master-0 kubenswrapper[30420]: I0318 10:20:29.577668 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c17slml"
Mar 18 10:20:29.592766 master-0 kubenswrapper[30420]: I0318 10:20:29.591482 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c17slml"]
Mar 18 10:20:29.644619 master-0 kubenswrapper[30420]: I0318 10:20:29.644342 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f51dfe12-fe37-4594-8b6a-296fcba40dac-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c17slml\" (UID: \"f51dfe12-fe37-4594-8b6a-296fcba40dac\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c17slml"
Mar 18 10:20:29.644874 master-0 kubenswrapper[30420]: I0318 10:20:29.644725 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knd6b\" (UniqueName: \"kubernetes.io/projected/f51dfe12-fe37-4594-8b6a-296fcba40dac-kube-api-access-knd6b\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c17slml\" (UID: \"f51dfe12-fe37-4594-8b6a-296fcba40dac\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c17slml"
Mar 18 10:20:29.644874 master-0 kubenswrapper[30420]: I0318 10:20:29.644784 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f51dfe12-fe37-4594-8b6a-296fcba40dac-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c17slml\" (UID: \"f51dfe12-fe37-4594-8b6a-296fcba40dac\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c17slml"
Mar 18 10:20:29.746849 master-0 kubenswrapper[30420]: I0318 10:20:29.746491 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f51dfe12-fe37-4594-8b6a-296fcba40dac-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c17slml\" (UID: \"f51dfe12-fe37-4594-8b6a-296fcba40dac\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c17slml"
Mar 18 10:20:29.746849 master-0 kubenswrapper[30420]: I0318 10:20:29.746732 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knd6b\" (UniqueName: \"kubernetes.io/projected/f51dfe12-fe37-4594-8b6a-296fcba40dac-kube-api-access-knd6b\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c17slml\" (UID: \"f51dfe12-fe37-4594-8b6a-296fcba40dac\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c17slml"
Mar 18 10:20:29.746849 master-0 kubenswrapper[30420]: I0318 10:20:29.746783 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f51dfe12-fe37-4594-8b6a-296fcba40dac-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c17slml\" (UID: \"f51dfe12-fe37-4594-8b6a-296fcba40dac\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c17slml"
Mar 18 10:20:29.747585 master-0 kubenswrapper[30420]: I0318 10:20:29.747139 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f51dfe12-fe37-4594-8b6a-296fcba40dac-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c17slml\" (UID: \"f51dfe12-fe37-4594-8b6a-296fcba40dac\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c17slml"
Mar 18 10:20:29.750947 master-0 kubenswrapper[30420]: I0318 10:20:29.747889 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f51dfe12-fe37-4594-8b6a-296fcba40dac-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c17slml\" (UID: \"f51dfe12-fe37-4594-8b6a-296fcba40dac\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c17slml"
Mar 18 10:20:29.770201 master-0 kubenswrapper[30420]: I0318 10:20:29.770159 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knd6b\" (UniqueName: \"kubernetes.io/projected/f51dfe12-fe37-4594-8b6a-296fcba40dac-kube-api-access-knd6b\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c17slml\" (UID: \"f51dfe12-fe37-4594-8b6a-296fcba40dac\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c17slml"
Mar 18 10:20:29.964387 master-0 kubenswrapper[30420]: I0318 10:20:29.964155 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c17slml"
Mar 18 10:20:30.357460 master-0 kubenswrapper[30420]: I0318 10:20:30.357352 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wssvf"]
Mar 18 10:20:30.361122 master-0 kubenswrapper[30420]: I0318 10:20:30.360208 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wssvf"
Mar 18 10:20:30.369176 master-0 kubenswrapper[30420]: I0318 10:20:30.369106 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wssvf"]
Mar 18 10:20:30.417000 master-0 kubenswrapper[30420]: I0318 10:20:30.416914 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c17slml"]
Mar 18 10:20:30.458677 master-0 kubenswrapper[30420]: I0318 10:20:30.458596 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/acc9dd36-b6fe-436b-9b35-321fcdb96b2c-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wssvf\" (UID: \"acc9dd36-b6fe-436b-9b35-321fcdb96b2c\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wssvf"
Mar 18 10:20:30.458929 master-0 kubenswrapper[30420]: I0318 10:20:30.458697 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/acc9dd36-b6fe-436b-9b35-321fcdb96b2c-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wssvf\" (UID: \"acc9dd36-b6fe-436b-9b35-321fcdb96b2c\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wssvf"
Mar 18 10:20:30.458929 master-0 kubenswrapper[30420]: I0318 10:20:30.458753 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs59k\" (UniqueName: \"kubernetes.io/projected/acc9dd36-b6fe-436b-9b35-321fcdb96b2c-kube-api-access-vs59k\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wssvf\" (UID: \"acc9dd36-b6fe-436b-9b35-321fcdb96b2c\") "
pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wssvf" Mar 18 10:20:30.560712 master-0 kubenswrapper[30420]: I0318 10:20:30.560622 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs59k\" (UniqueName: \"kubernetes.io/projected/acc9dd36-b6fe-436b-9b35-321fcdb96b2c-kube-api-access-vs59k\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wssvf\" (UID: \"acc9dd36-b6fe-436b-9b35-321fcdb96b2c\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wssvf" Mar 18 10:20:30.560998 master-0 kubenswrapper[30420]: I0318 10:20:30.560791 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/acc9dd36-b6fe-436b-9b35-321fcdb96b2c-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wssvf\" (UID: \"acc9dd36-b6fe-436b-9b35-321fcdb96b2c\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wssvf" Mar 18 10:20:30.560998 master-0 kubenswrapper[30420]: I0318 10:20:30.560860 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/acc9dd36-b6fe-436b-9b35-321fcdb96b2c-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wssvf\" (UID: \"acc9dd36-b6fe-436b-9b35-321fcdb96b2c\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wssvf" Mar 18 10:20:30.561434 master-0 kubenswrapper[30420]: I0318 10:20:30.561392 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/acc9dd36-b6fe-436b-9b35-321fcdb96b2c-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wssvf\" (UID: \"acc9dd36-b6fe-436b-9b35-321fcdb96b2c\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wssvf" Mar 18 
10:20:30.561633 master-0 kubenswrapper[30420]: I0318 10:20:30.561582 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/acc9dd36-b6fe-436b-9b35-321fcdb96b2c-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wssvf\" (UID: \"acc9dd36-b6fe-436b-9b35-321fcdb96b2c\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wssvf" Mar 18 10:20:30.582563 master-0 kubenswrapper[30420]: I0318 10:20:30.582512 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs59k\" (UniqueName: \"kubernetes.io/projected/acc9dd36-b6fe-436b-9b35-321fcdb96b2c-kube-api-access-vs59k\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wssvf\" (UID: \"acc9dd36-b6fe-436b-9b35-321fcdb96b2c\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wssvf" Mar 18 10:20:30.695069 master-0 kubenswrapper[30420]: I0318 10:20:30.694901 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wssvf" Mar 18 10:20:31.348533 master-0 kubenswrapper[30420]: W0318 10:20:31.347923 30420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf51dfe12_fe37_4594_8b6a_296fcba40dac.slice/crio-7382fab8b648ac07e9219aa6d450b99b3a55dc1e76c922d12d12c80f9d1fba3b WatchSource:0}: Error finding container 7382fab8b648ac07e9219aa6d450b99b3a55dc1e76c922d12d12c80f9d1fba3b: Status 404 returned error can't find the container with id 7382fab8b648ac07e9219aa6d450b99b3a55dc1e76c922d12d12c80f9d1fba3b Mar 18 10:20:31.693147 master-0 kubenswrapper[30420]: I0318 10:20:31.693075 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54w8cp" event={"ID":"8d8cbe20-08bf-417a-9a3e-faa63cde3989","Type":"ContainerStarted","Data":"556af305f8aed0c82d28b7a5fb625fa2d7c0c43175d1751ec86eded5c8b8c68c"} Mar 18 10:20:31.694495 master-0 kubenswrapper[30420]: I0318 10:20:31.694442 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c17slml" event={"ID":"f51dfe12-fe37-4594-8b6a-296fcba40dac","Type":"ContainerStarted","Data":"491ba996f326ea719c1d5570c861fce6085d559834083ac52c9ed1785b6019e2"} Mar 18 10:20:31.694589 master-0 kubenswrapper[30420]: I0318 10:20:31.694496 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c17slml" event={"ID":"f51dfe12-fe37-4594-8b6a-296fcba40dac","Type":"ContainerStarted","Data":"7382fab8b648ac07e9219aa6d450b99b3a55dc1e76c922d12d12c80f9d1fba3b"} Mar 18 10:20:31.909864 master-0 kubenswrapper[30420]: I0318 10:20:31.909698 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wssvf"] Mar 18 10:20:31.941010 master-0 kubenswrapper[30420]: W0318 10:20:31.940919 30420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacc9dd36_b6fe_436b_9b35_321fcdb96b2c.slice/crio-6b9eddbd1c8ab0e91a88dc8c077934c390cdb5a614febe10e4a55a5739dc1b28 WatchSource:0}: Error finding container 6b9eddbd1c8ab0e91a88dc8c077934c390cdb5a614febe10e4a55a5739dc1b28: Status 404 returned error can't find the container with id 6b9eddbd1c8ab0e91a88dc8c077934c390cdb5a614febe10e4a55a5739dc1b28 Mar 18 10:20:32.705478 master-0 kubenswrapper[30420]: I0318 10:20:32.705315 30420 generic.go:334] "Generic (PLEG): container finished" podID="8d8cbe20-08bf-417a-9a3e-faa63cde3989" containerID="556af305f8aed0c82d28b7a5fb625fa2d7c0c43175d1751ec86eded5c8b8c68c" exitCode=0 Mar 18 10:20:32.705478 master-0 kubenswrapper[30420]: I0318 10:20:32.705380 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54w8cp" event={"ID":"8d8cbe20-08bf-417a-9a3e-faa63cde3989","Type":"ContainerDied","Data":"556af305f8aed0c82d28b7a5fb625fa2d7c0c43175d1751ec86eded5c8b8c68c"} Mar 18 10:20:32.709161 master-0 kubenswrapper[30420]: I0318 10:20:32.709060 30420 generic.go:334] "Generic (PLEG): container finished" podID="f51dfe12-fe37-4594-8b6a-296fcba40dac" containerID="491ba996f326ea719c1d5570c861fce6085d559834083ac52c9ed1785b6019e2" exitCode=0 Mar 18 10:20:32.709326 master-0 kubenswrapper[30420]: I0318 10:20:32.709286 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c17slml" event={"ID":"f51dfe12-fe37-4594-8b6a-296fcba40dac","Type":"ContainerDied","Data":"491ba996f326ea719c1d5570c861fce6085d559834083ac52c9ed1785b6019e2"} Mar 18 10:20:32.712926 master-0 kubenswrapper[30420]: I0318 
10:20:32.712717 30420 generic.go:334] "Generic (PLEG): container finished" podID="acc9dd36-b6fe-436b-9b35-321fcdb96b2c" containerID="44c7e81f822e6d43a3c37fc6ec51e8b2f4485330ec37e4340af614a1c4ae9a4d" exitCode=0 Mar 18 10:20:32.712926 master-0 kubenswrapper[30420]: I0318 10:20:32.712780 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wssvf" event={"ID":"acc9dd36-b6fe-436b-9b35-321fcdb96b2c","Type":"ContainerDied","Data":"44c7e81f822e6d43a3c37fc6ec51e8b2f4485330ec37e4340af614a1c4ae9a4d"} Mar 18 10:20:32.712926 master-0 kubenswrapper[30420]: I0318 10:20:32.712818 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wssvf" event={"ID":"acc9dd36-b6fe-436b-9b35-321fcdb96b2c","Type":"ContainerStarted","Data":"6b9eddbd1c8ab0e91a88dc8c077934c390cdb5a614febe10e4a55a5739dc1b28"} Mar 18 10:20:33.726997 master-0 kubenswrapper[30420]: I0318 10:20:33.726935 30420 generic.go:334] "Generic (PLEG): container finished" podID="8d8cbe20-08bf-417a-9a3e-faa63cde3989" containerID="17e496ffd67d81636e859eadfd64f77f14b5948c94ee96c08a61930eac3773a1" exitCode=0 Mar 18 10:20:33.726997 master-0 kubenswrapper[30420]: I0318 10:20:33.726988 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54w8cp" event={"ID":"8d8cbe20-08bf-417a-9a3e-faa63cde3989","Type":"ContainerDied","Data":"17e496ffd67d81636e859eadfd64f77f14b5948c94ee96c08a61930eac3773a1"} Mar 18 10:20:34.743406 master-0 kubenswrapper[30420]: I0318 10:20:34.743202 30420 generic.go:334] "Generic (PLEG): container finished" podID="acc9dd36-b6fe-436b-9b35-321fcdb96b2c" containerID="ba4be455ed23e594a9a838bee54011961a974c33c3534b965d0c0016dae9dd2b" exitCode=0 Mar 18 10:20:34.744302 master-0 kubenswrapper[30420]: I0318 10:20:34.743385 30420 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wssvf" event={"ID":"acc9dd36-b6fe-436b-9b35-321fcdb96b2c","Type":"ContainerDied","Data":"ba4be455ed23e594a9a838bee54011961a974c33c3534b965d0c0016dae9dd2b"} Mar 18 10:20:34.747189 master-0 kubenswrapper[30420]: I0318 10:20:34.747100 30420 generic.go:334] "Generic (PLEG): container finished" podID="f51dfe12-fe37-4594-8b6a-296fcba40dac" containerID="bdc4bb783c273e0fd49d7e73fa2ea443e38ef169977a4041e8e9275c233cf335" exitCode=0 Mar 18 10:20:34.747399 master-0 kubenswrapper[30420]: I0318 10:20:34.747234 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c17slml" event={"ID":"f51dfe12-fe37-4594-8b6a-296fcba40dac","Type":"ContainerDied","Data":"bdc4bb783c273e0fd49d7e73fa2ea443e38ef169977a4041e8e9275c233cf335"} Mar 18 10:20:35.120859 master-0 kubenswrapper[30420]: I0318 10:20:35.120811 30420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54w8cp" Mar 18 10:20:35.241873 master-0 kubenswrapper[30420]: I0318 10:20:35.241769 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crkt9\" (UniqueName: \"kubernetes.io/projected/8d8cbe20-08bf-417a-9a3e-faa63cde3989-kube-api-access-crkt9\") pod \"8d8cbe20-08bf-417a-9a3e-faa63cde3989\" (UID: \"8d8cbe20-08bf-417a-9a3e-faa63cde3989\") " Mar 18 10:20:35.245046 master-0 kubenswrapper[30420]: I0318 10:20:35.242123 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8d8cbe20-08bf-417a-9a3e-faa63cde3989-util\") pod \"8d8cbe20-08bf-417a-9a3e-faa63cde3989\" (UID: \"8d8cbe20-08bf-417a-9a3e-faa63cde3989\") " Mar 18 10:20:35.245046 master-0 kubenswrapper[30420]: I0318 10:20:35.242258 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8d8cbe20-08bf-417a-9a3e-faa63cde3989-bundle\") pod \"8d8cbe20-08bf-417a-9a3e-faa63cde3989\" (UID: \"8d8cbe20-08bf-417a-9a3e-faa63cde3989\") " Mar 18 10:20:35.245046 master-0 kubenswrapper[30420]: I0318 10:20:35.243369 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d8cbe20-08bf-417a-9a3e-faa63cde3989-bundle" (OuterVolumeSpecName: "bundle") pod "8d8cbe20-08bf-417a-9a3e-faa63cde3989" (UID: "8d8cbe20-08bf-417a-9a3e-faa63cde3989"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 10:20:35.248065 master-0 kubenswrapper[30420]: I0318 10:20:35.247999 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d8cbe20-08bf-417a-9a3e-faa63cde3989-kube-api-access-crkt9" (OuterVolumeSpecName: "kube-api-access-crkt9") pod "8d8cbe20-08bf-417a-9a3e-faa63cde3989" (UID: "8d8cbe20-08bf-417a-9a3e-faa63cde3989"). InnerVolumeSpecName "kube-api-access-crkt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 10:20:35.262801 master-0 kubenswrapper[30420]: I0318 10:20:35.261994 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d8cbe20-08bf-417a-9a3e-faa63cde3989-util" (OuterVolumeSpecName: "util") pod "8d8cbe20-08bf-417a-9a3e-faa63cde3989" (UID: "8d8cbe20-08bf-417a-9a3e-faa63cde3989"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 10:20:35.344442 master-0 kubenswrapper[30420]: I0318 10:20:35.344375 30420 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8d8cbe20-08bf-417a-9a3e-faa63cde3989-util\") on node \"master-0\" DevicePath \"\"" Mar 18 10:20:35.344719 master-0 kubenswrapper[30420]: I0318 10:20:35.344703 30420 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8d8cbe20-08bf-417a-9a3e-faa63cde3989-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 10:20:35.344815 master-0 kubenswrapper[30420]: I0318 10:20:35.344798 30420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crkt9\" (UniqueName: \"kubernetes.io/projected/8d8cbe20-08bf-417a-9a3e-faa63cde3989-kube-api-access-crkt9\") on node \"master-0\" DevicePath \"\"" Mar 18 10:20:35.776719 master-0 kubenswrapper[30420]: I0318 10:20:35.776637 30420 generic.go:334] "Generic (PLEG): container finished" podID="acc9dd36-b6fe-436b-9b35-321fcdb96b2c" 
containerID="ddd21ceb922d8c030206e84bc8562d0c5ae87e11fa0dafff618c423aa1ab35fa" exitCode=0 Mar 18 10:20:35.777955 master-0 kubenswrapper[30420]: I0318 10:20:35.776758 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wssvf" event={"ID":"acc9dd36-b6fe-436b-9b35-321fcdb96b2c","Type":"ContainerDied","Data":"ddd21ceb922d8c030206e84bc8562d0c5ae87e11fa0dafff618c423aa1ab35fa"} Mar 18 10:20:35.780360 master-0 kubenswrapper[30420]: I0318 10:20:35.780317 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54w8cp" event={"ID":"8d8cbe20-08bf-417a-9a3e-faa63cde3989","Type":"ContainerDied","Data":"529c1b4f4c08a25a5308bd7e5cd638ef68a74ce756002f0d1b77bb3132f24fd7"} Mar 18 10:20:35.780479 master-0 kubenswrapper[30420]: I0318 10:20:35.780364 30420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="529c1b4f4c08a25a5308bd7e5cd638ef68a74ce756002f0d1b77bb3132f24fd7" Mar 18 10:20:35.780642 master-0 kubenswrapper[30420]: I0318 10:20:35.780343 30420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54w8cp" Mar 18 10:20:35.786172 master-0 kubenswrapper[30420]: I0318 10:20:35.786124 30420 generic.go:334] "Generic (PLEG): container finished" podID="f51dfe12-fe37-4594-8b6a-296fcba40dac" containerID="89477769ccf57552e1de470f8cce2dec95da15b4e08515484fc1e6d9600f5b9f" exitCode=0 Mar 18 10:20:35.786310 master-0 kubenswrapper[30420]: I0318 10:20:35.786179 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c17slml" event={"ID":"f51dfe12-fe37-4594-8b6a-296fcba40dac","Type":"ContainerDied","Data":"89477769ccf57552e1de470f8cce2dec95da15b4e08515484fc1e6d9600f5b9f"} Mar 18 10:20:35.796433 master-0 kubenswrapper[30420]: I0318 10:20:35.796347 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726jx4nj"] Mar 18 10:20:35.796802 master-0 kubenswrapper[30420]: E0318 10:20:35.796773 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d8cbe20-08bf-417a-9a3e-faa63cde3989" containerName="pull" Mar 18 10:20:35.796971 master-0 kubenswrapper[30420]: I0318 10:20:35.796812 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d8cbe20-08bf-417a-9a3e-faa63cde3989" containerName="pull" Mar 18 10:20:35.796971 master-0 kubenswrapper[30420]: E0318 10:20:35.796913 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d8cbe20-08bf-417a-9a3e-faa63cde3989" containerName="util" Mar 18 10:20:35.796971 master-0 kubenswrapper[30420]: I0318 10:20:35.796928 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d8cbe20-08bf-417a-9a3e-faa63cde3989" containerName="util" Mar 18 10:20:35.796971 master-0 kubenswrapper[30420]: E0318 10:20:35.796943 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d8cbe20-08bf-417a-9a3e-faa63cde3989" containerName="extract" 
Mar 18 10:20:35.796971 master-0 kubenswrapper[30420]: I0318 10:20:35.796953 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d8cbe20-08bf-417a-9a3e-faa63cde3989" containerName="extract" Mar 18 10:20:35.797244 master-0 kubenswrapper[30420]: I0318 10:20:35.797209 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d8cbe20-08bf-417a-9a3e-faa63cde3989" containerName="extract" Mar 18 10:20:35.798720 master-0 kubenswrapper[30420]: I0318 10:20:35.798648 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726jx4nj" Mar 18 10:20:35.812281 master-0 kubenswrapper[30420]: I0318 10:20:35.812226 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726jx4nj"] Mar 18 10:20:35.954917 master-0 kubenswrapper[30420]: I0318 10:20:35.954786 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57ce87d6-d0bb-4504-ba05-d31b48cd5da6-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726jx4nj\" (UID: \"57ce87d6-d0bb-4504-ba05-d31b48cd5da6\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726jx4nj" Mar 18 10:20:35.954917 master-0 kubenswrapper[30420]: I0318 10:20:35.954897 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57ce87d6-d0bb-4504-ba05-d31b48cd5da6-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726jx4nj\" (UID: \"57ce87d6-d0bb-4504-ba05-d31b48cd5da6\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726jx4nj" Mar 18 10:20:35.955244 master-0 kubenswrapper[30420]: I0318 10:20:35.955182 30420 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttptq\" (UniqueName: \"kubernetes.io/projected/57ce87d6-d0bb-4504-ba05-d31b48cd5da6-kube-api-access-ttptq\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726jx4nj\" (UID: \"57ce87d6-d0bb-4504-ba05-d31b48cd5da6\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726jx4nj" Mar 18 10:20:36.057339 master-0 kubenswrapper[30420]: I0318 10:20:36.057276 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57ce87d6-d0bb-4504-ba05-d31b48cd5da6-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726jx4nj\" (UID: \"57ce87d6-d0bb-4504-ba05-d31b48cd5da6\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726jx4nj" Mar 18 10:20:36.057719 master-0 kubenswrapper[30420]: I0318 10:20:36.057698 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttptq\" (UniqueName: \"kubernetes.io/projected/57ce87d6-d0bb-4504-ba05-d31b48cd5da6-kube-api-access-ttptq\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726jx4nj\" (UID: \"57ce87d6-d0bb-4504-ba05-d31b48cd5da6\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726jx4nj" Mar 18 10:20:36.057888 master-0 kubenswrapper[30420]: I0318 10:20:36.057870 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57ce87d6-d0bb-4504-ba05-d31b48cd5da6-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726jx4nj\" (UID: \"57ce87d6-d0bb-4504-ba05-d31b48cd5da6\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726jx4nj" Mar 18 10:20:36.058052 master-0 kubenswrapper[30420]: I0318 10:20:36.057876 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"util\" (UniqueName: \"kubernetes.io/empty-dir/57ce87d6-d0bb-4504-ba05-d31b48cd5da6-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726jx4nj\" (UID: \"57ce87d6-d0bb-4504-ba05-d31b48cd5da6\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726jx4nj" Mar 18 10:20:36.059583 master-0 kubenswrapper[30420]: I0318 10:20:36.059541 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57ce87d6-d0bb-4504-ba05-d31b48cd5da6-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726jx4nj\" (UID: \"57ce87d6-d0bb-4504-ba05-d31b48cd5da6\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726jx4nj" Mar 18 10:20:36.078961 master-0 kubenswrapper[30420]: I0318 10:20:36.078917 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttptq\" (UniqueName: \"kubernetes.io/projected/57ce87d6-d0bb-4504-ba05-d31b48cd5da6-kube-api-access-ttptq\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726jx4nj\" (UID: \"57ce87d6-d0bb-4504-ba05-d31b48cd5da6\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726jx4nj" Mar 18 10:20:36.130315 master-0 kubenswrapper[30420]: I0318 10:20:36.130242 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726jx4nj" Mar 18 10:20:36.582941 master-0 kubenswrapper[30420]: I0318 10:20:36.582494 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726jx4nj"] Mar 18 10:20:36.582941 master-0 kubenswrapper[30420]: W0318 10:20:36.582801 30420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57ce87d6_d0bb_4504_ba05_d31b48cd5da6.slice/crio-40dc5e5d97aaca47b82813019ec4adb13dfe29cb4c041d16ddc6f217339e8d9f WatchSource:0}: Error finding container 40dc5e5d97aaca47b82813019ec4adb13dfe29cb4c041d16ddc6f217339e8d9f: Status 404 returned error can't find the container with id 40dc5e5d97aaca47b82813019ec4adb13dfe29cb4c041d16ddc6f217339e8d9f Mar 18 10:20:36.795021 master-0 kubenswrapper[30420]: I0318 10:20:36.794882 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726jx4nj" event={"ID":"57ce87d6-d0bb-4504-ba05-d31b48cd5da6","Type":"ContainerStarted","Data":"ba1b113732070fd892bfa8af212e304c6d2efdbc9fb5f03148feec4e29927df7"} Mar 18 10:20:36.795021 master-0 kubenswrapper[30420]: I0318 10:20:36.794951 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726jx4nj" event={"ID":"57ce87d6-d0bb-4504-ba05-d31b48cd5da6","Type":"ContainerStarted","Data":"40dc5e5d97aaca47b82813019ec4adb13dfe29cb4c041d16ddc6f217339e8d9f"} Mar 18 10:20:37.212190 master-0 kubenswrapper[30420]: I0318 10:20:37.212133 30420 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wssvf" Mar 18 10:20:37.222317 master-0 kubenswrapper[30420]: I0318 10:20:37.221098 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c17slml" Mar 18 10:20:37.377900 master-0 kubenswrapper[30420]: I0318 10:20:37.377734 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f51dfe12-fe37-4594-8b6a-296fcba40dac-bundle\") pod \"f51dfe12-fe37-4594-8b6a-296fcba40dac\" (UID: \"f51dfe12-fe37-4594-8b6a-296fcba40dac\") " Mar 18 10:20:37.377900 master-0 kubenswrapper[30420]: I0318 10:20:37.377904 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knd6b\" (UniqueName: \"kubernetes.io/projected/f51dfe12-fe37-4594-8b6a-296fcba40dac-kube-api-access-knd6b\") pod \"f51dfe12-fe37-4594-8b6a-296fcba40dac\" (UID: \"f51dfe12-fe37-4594-8b6a-296fcba40dac\") " Mar 18 10:20:37.378401 master-0 kubenswrapper[30420]: I0318 10:20:37.378097 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/acc9dd36-b6fe-436b-9b35-321fcdb96b2c-util\") pod \"acc9dd36-b6fe-436b-9b35-321fcdb96b2c\" (UID: \"acc9dd36-b6fe-436b-9b35-321fcdb96b2c\") " Mar 18 10:20:37.378633 master-0 kubenswrapper[30420]: I0318 10:20:37.378598 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f51dfe12-fe37-4594-8b6a-296fcba40dac-util\") pod \"f51dfe12-fe37-4594-8b6a-296fcba40dac\" (UID: \"f51dfe12-fe37-4594-8b6a-296fcba40dac\") " Mar 18 10:20:37.378701 master-0 kubenswrapper[30420]: I0318 10:20:37.378660 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vs59k\" (UniqueName: 
\"kubernetes.io/projected/acc9dd36-b6fe-436b-9b35-321fcdb96b2c-kube-api-access-vs59k\") pod \"acc9dd36-b6fe-436b-9b35-321fcdb96b2c\" (UID: \"acc9dd36-b6fe-436b-9b35-321fcdb96b2c\") "
Mar 18 10:20:37.379261 master-0 kubenswrapper[30420]: I0318 10:20:37.379185 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f51dfe12-fe37-4594-8b6a-296fcba40dac-bundle" (OuterVolumeSpecName: "bundle") pod "f51dfe12-fe37-4594-8b6a-296fcba40dac" (UID: "f51dfe12-fe37-4594-8b6a-296fcba40dac"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 10:20:37.379511 master-0 kubenswrapper[30420]: I0318 10:20:37.379474 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/acc9dd36-b6fe-436b-9b35-321fcdb96b2c-bundle\") pod \"acc9dd36-b6fe-436b-9b35-321fcdb96b2c\" (UID: \"acc9dd36-b6fe-436b-9b35-321fcdb96b2c\") "
Mar 18 10:20:37.380301 master-0 kubenswrapper[30420]: I0318 10:20:37.380238 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acc9dd36-b6fe-436b-9b35-321fcdb96b2c-bundle" (OuterVolumeSpecName: "bundle") pod "acc9dd36-b6fe-436b-9b35-321fcdb96b2c" (UID: "acc9dd36-b6fe-436b-9b35-321fcdb96b2c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 10:20:37.380789 master-0 kubenswrapper[30420]: I0318 10:20:37.380723 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f51dfe12-fe37-4594-8b6a-296fcba40dac-kube-api-access-knd6b" (OuterVolumeSpecName: "kube-api-access-knd6b") pod "f51dfe12-fe37-4594-8b6a-296fcba40dac" (UID: "f51dfe12-fe37-4594-8b6a-296fcba40dac"). InnerVolumeSpecName "kube-api-access-knd6b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 10:20:37.383311 master-0 kubenswrapper[30420]: I0318 10:20:37.383263 30420 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f51dfe12-fe37-4594-8b6a-296fcba40dac-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 10:20:37.383430 master-0 kubenswrapper[30420]: I0318 10:20:37.383328 30420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knd6b\" (UniqueName: \"kubernetes.io/projected/f51dfe12-fe37-4594-8b6a-296fcba40dac-kube-api-access-knd6b\") on node \"master-0\" DevicePath \"\""
Mar 18 10:20:37.387262 master-0 kubenswrapper[30420]: I0318 10:20:37.387038 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acc9dd36-b6fe-436b-9b35-321fcdb96b2c-kube-api-access-vs59k" (OuterVolumeSpecName: "kube-api-access-vs59k") pod "acc9dd36-b6fe-436b-9b35-321fcdb96b2c" (UID: "acc9dd36-b6fe-436b-9b35-321fcdb96b2c"). InnerVolumeSpecName "kube-api-access-vs59k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 10:20:37.393066 master-0 kubenswrapper[30420]: I0318 10:20:37.393002 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acc9dd36-b6fe-436b-9b35-321fcdb96b2c-util" (OuterVolumeSpecName: "util") pod "acc9dd36-b6fe-436b-9b35-321fcdb96b2c" (UID: "acc9dd36-b6fe-436b-9b35-321fcdb96b2c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 10:20:37.396526 master-0 kubenswrapper[30420]: I0318 10:20:37.396482 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f51dfe12-fe37-4594-8b6a-296fcba40dac-util" (OuterVolumeSpecName: "util") pod "f51dfe12-fe37-4594-8b6a-296fcba40dac" (UID: "f51dfe12-fe37-4594-8b6a-296fcba40dac"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 10:20:37.485848 master-0 kubenswrapper[30420]: I0318 10:20:37.485631 30420 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/acc9dd36-b6fe-436b-9b35-321fcdb96b2c-util\") on node \"master-0\" DevicePath \"\""
Mar 18 10:20:37.486337 master-0 kubenswrapper[30420]: I0318 10:20:37.486264 30420 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f51dfe12-fe37-4594-8b6a-296fcba40dac-util\") on node \"master-0\" DevicePath \"\""
Mar 18 10:20:37.486484 master-0 kubenswrapper[30420]: I0318 10:20:37.486459 30420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vs59k\" (UniqueName: \"kubernetes.io/projected/acc9dd36-b6fe-436b-9b35-321fcdb96b2c-kube-api-access-vs59k\") on node \"master-0\" DevicePath \"\""
Mar 18 10:20:37.486597 master-0 kubenswrapper[30420]: I0318 10:20:37.486578 30420 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/acc9dd36-b6fe-436b-9b35-321fcdb96b2c-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 10:20:37.804180 master-0 kubenswrapper[30420]: I0318 10:20:37.804095 30420 generic.go:334] "Generic (PLEG): container finished" podID="57ce87d6-d0bb-4504-ba05-d31b48cd5da6" containerID="ba1b113732070fd892bfa8af212e304c6d2efdbc9fb5f03148feec4e29927df7" exitCode=0
Mar 18 10:20:37.805289 master-0 kubenswrapper[30420]: I0318 10:20:37.804203 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726jx4nj" event={"ID":"57ce87d6-d0bb-4504-ba05-d31b48cd5da6","Type":"ContainerDied","Data":"ba1b113732070fd892bfa8af212e304c6d2efdbc9fb5f03148feec4e29927df7"}
Mar 18 10:20:37.810890 master-0 kubenswrapper[30420]: I0318 10:20:37.810762 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wssvf" event={"ID":"acc9dd36-b6fe-436b-9b35-321fcdb96b2c","Type":"ContainerDied","Data":"6b9eddbd1c8ab0e91a88dc8c077934c390cdb5a614febe10e4a55a5739dc1b28"}
Mar 18 10:20:37.810890 master-0 kubenswrapper[30420]: I0318 10:20:37.810809 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wssvf"
Mar 18 10:20:37.811359 master-0 kubenswrapper[30420]: I0318 10:20:37.810856 30420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b9eddbd1c8ab0e91a88dc8c077934c390cdb5a614febe10e4a55a5739dc1b28"
Mar 18 10:20:37.814321 master-0 kubenswrapper[30420]: I0318 10:20:37.814235 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c17slml" event={"ID":"f51dfe12-fe37-4594-8b6a-296fcba40dac","Type":"ContainerDied","Data":"7382fab8b648ac07e9219aa6d450b99b3a55dc1e76c922d12d12c80f9d1fba3b"}
Mar 18 10:20:37.814321 master-0 kubenswrapper[30420]: I0318 10:20:37.814287 30420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7382fab8b648ac07e9219aa6d450b99b3a55dc1e76c922d12d12c80f9d1fba3b"
Mar 18 10:20:37.814851 master-0 kubenswrapper[30420]: I0318 10:20:37.814365 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c17slml"
Mar 18 10:20:40.472567 master-0 kubenswrapper[30420]: I0318 10:20:40.472505 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-6h4pb"]
Mar 18 10:20:40.473215 master-0 kubenswrapper[30420]: E0318 10:20:40.472787 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f51dfe12-fe37-4594-8b6a-296fcba40dac" containerName="pull"
Mar 18 10:20:40.473215 master-0 kubenswrapper[30420]: I0318 10:20:40.472800 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="f51dfe12-fe37-4594-8b6a-296fcba40dac" containerName="pull"
Mar 18 10:20:40.473215 master-0 kubenswrapper[30420]: E0318 10:20:40.472807 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f51dfe12-fe37-4594-8b6a-296fcba40dac" containerName="extract"
Mar 18 10:20:40.473215 master-0 kubenswrapper[30420]: I0318 10:20:40.472814 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="f51dfe12-fe37-4594-8b6a-296fcba40dac" containerName="extract"
Mar 18 10:20:40.473215 master-0 kubenswrapper[30420]: E0318 10:20:40.472847 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acc9dd36-b6fe-436b-9b35-321fcdb96b2c" containerName="extract"
Mar 18 10:20:40.473215 master-0 kubenswrapper[30420]: I0318 10:20:40.472854 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="acc9dd36-b6fe-436b-9b35-321fcdb96b2c" containerName="extract"
Mar 18 10:20:40.473215 master-0 kubenswrapper[30420]: E0318 10:20:40.472863 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acc9dd36-b6fe-436b-9b35-321fcdb96b2c" containerName="pull"
Mar 18 10:20:40.473215 master-0 kubenswrapper[30420]: I0318 10:20:40.472869 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="acc9dd36-b6fe-436b-9b35-321fcdb96b2c" containerName="pull"
Mar 18 10:20:40.473215 master-0 kubenswrapper[30420]: E0318 10:20:40.472879 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f51dfe12-fe37-4594-8b6a-296fcba40dac" containerName="util"
Mar 18 10:20:40.473215 master-0 kubenswrapper[30420]: I0318 10:20:40.472884 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="f51dfe12-fe37-4594-8b6a-296fcba40dac" containerName="util"
Mar 18 10:20:40.473215 master-0 kubenswrapper[30420]: E0318 10:20:40.472910 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acc9dd36-b6fe-436b-9b35-321fcdb96b2c" containerName="util"
Mar 18 10:20:40.473215 master-0 kubenswrapper[30420]: I0318 10:20:40.472916 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="acc9dd36-b6fe-436b-9b35-321fcdb96b2c" containerName="util"
Mar 18 10:20:40.473215 master-0 kubenswrapper[30420]: I0318 10:20:40.473032 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="f51dfe12-fe37-4594-8b6a-296fcba40dac" containerName="extract"
Mar 18 10:20:40.473215 master-0 kubenswrapper[30420]: I0318 10:20:40.473125 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="acc9dd36-b6fe-436b-9b35-321fcdb96b2c" containerName="extract"
Mar 18 10:20:40.488476 master-0 kubenswrapper[30420]: I0318 10:20:40.488295 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-6h4pb"
Mar 18 10:20:40.491477 master-0 kubenswrapper[30420]: I0318 10:20:40.490791 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt"
Mar 18 10:20:40.493286 master-0 kubenswrapper[30420]: I0318 10:20:40.493195 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt"
Mar 18 10:20:40.509186 master-0 kubenswrapper[30420]: I0318 10:20:40.506407 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-6h4pb"]
Mar 18 10:20:40.655971 master-0 kubenswrapper[30420]: I0318 10:20:40.655890 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/978c8d14-d6a3-4f44-bf82-8640fa8b4db4-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-6h4pb\" (UID: \"978c8d14-d6a3-4f44-bf82-8640fa8b4db4\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-6h4pb"
Mar 18 10:20:40.656241 master-0 kubenswrapper[30420]: I0318 10:20:40.656017 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86nnc\" (UniqueName: \"kubernetes.io/projected/978c8d14-d6a3-4f44-bf82-8640fa8b4db4-kube-api-access-86nnc\") pod \"cert-manager-operator-controller-manager-66c8bdd694-6h4pb\" (UID: \"978c8d14-d6a3-4f44-bf82-8640fa8b4db4\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-6h4pb"
Mar 18 10:20:40.758292 master-0 kubenswrapper[30420]: I0318 10:20:40.758123 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86nnc\" (UniqueName: \"kubernetes.io/projected/978c8d14-d6a3-4f44-bf82-8640fa8b4db4-kube-api-access-86nnc\") pod \"cert-manager-operator-controller-manager-66c8bdd694-6h4pb\" (UID: \"978c8d14-d6a3-4f44-bf82-8640fa8b4db4\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-6h4pb"
Mar 18 10:20:40.758508 master-0 kubenswrapper[30420]: I0318 10:20:40.758322 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/978c8d14-d6a3-4f44-bf82-8640fa8b4db4-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-6h4pb\" (UID: \"978c8d14-d6a3-4f44-bf82-8640fa8b4db4\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-6h4pb"
Mar 18 10:20:40.758925 master-0 kubenswrapper[30420]: I0318 10:20:40.758891 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/978c8d14-d6a3-4f44-bf82-8640fa8b4db4-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-6h4pb\" (UID: \"978c8d14-d6a3-4f44-bf82-8640fa8b4db4\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-6h4pb"
Mar 18 10:20:40.774979 master-0 kubenswrapper[30420]: I0318 10:20:40.774912 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86nnc\" (UniqueName: \"kubernetes.io/projected/978c8d14-d6a3-4f44-bf82-8640fa8b4db4-kube-api-access-86nnc\") pod \"cert-manager-operator-controller-manager-66c8bdd694-6h4pb\" (UID: \"978c8d14-d6a3-4f44-bf82-8640fa8b4db4\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-6h4pb"
Mar 18 10:20:40.828385 master-0 kubenswrapper[30420]: I0318 10:20:40.828304 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-6h4pb"
Mar 18 10:20:40.845695 master-0 kubenswrapper[30420]: I0318 10:20:40.845623 30420 generic.go:334] "Generic (PLEG): container finished" podID="57ce87d6-d0bb-4504-ba05-d31b48cd5da6" containerID="f081165b393058d7619c7e25db473ea6c1165536c3e421fb42b5a7203768b799" exitCode=0
Mar 18 10:20:40.845695 master-0 kubenswrapper[30420]: I0318 10:20:40.845672 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726jx4nj" event={"ID":"57ce87d6-d0bb-4504-ba05-d31b48cd5da6","Type":"ContainerDied","Data":"f081165b393058d7619c7e25db473ea6c1165536c3e421fb42b5a7203768b799"}
Mar 18 10:20:41.380370 master-0 kubenswrapper[30420]: I0318 10:20:41.380294 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-6h4pb"]
Mar 18 10:20:41.389307 master-0 kubenswrapper[30420]: W0318 10:20:41.389213 30420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod978c8d14_d6a3_4f44_bf82_8640fa8b4db4.slice/crio-cf80d924e142c6e01cf42f1f5776609fe73bf4788fabb210fa287f70f9d53aea WatchSource:0}: Error finding container cf80d924e142c6e01cf42f1f5776609fe73bf4788fabb210fa287f70f9d53aea: Status 404 returned error can't find the container with id cf80d924e142c6e01cf42f1f5776609fe73bf4788fabb210fa287f70f9d53aea
Mar 18 10:20:41.855536 master-0 kubenswrapper[30420]: I0318 10:20:41.855471 30420 generic.go:334] "Generic (PLEG): container finished" podID="57ce87d6-d0bb-4504-ba05-d31b48cd5da6" containerID="8d662192e9fac6fc1fed20bb4564e58d0d34bf1235ded580248c5b0527dd146a" exitCode=0
Mar 18 10:20:41.856103 master-0 kubenswrapper[30420]: I0318 10:20:41.855528 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726jx4nj" event={"ID":"57ce87d6-d0bb-4504-ba05-d31b48cd5da6","Type":"ContainerDied","Data":"8d662192e9fac6fc1fed20bb4564e58d0d34bf1235ded580248c5b0527dd146a"}
Mar 18 10:20:41.856699 master-0 kubenswrapper[30420]: I0318 10:20:41.856668 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-6h4pb" event={"ID":"978c8d14-d6a3-4f44-bf82-8640fa8b4db4","Type":"ContainerStarted","Data":"cf80d924e142c6e01cf42f1f5776609fe73bf4788fabb210fa287f70f9d53aea"}
Mar 18 10:20:43.364158 master-0 kubenswrapper[30420]: I0318 10:20:43.364103 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726jx4nj"
Mar 18 10:20:43.506753 master-0 kubenswrapper[30420]: I0318 10:20:43.506661 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttptq\" (UniqueName: \"kubernetes.io/projected/57ce87d6-d0bb-4504-ba05-d31b48cd5da6-kube-api-access-ttptq\") pod \"57ce87d6-d0bb-4504-ba05-d31b48cd5da6\" (UID: \"57ce87d6-d0bb-4504-ba05-d31b48cd5da6\") "
Mar 18 10:20:43.506753 master-0 kubenswrapper[30420]: I0318 10:20:43.506754 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57ce87d6-d0bb-4504-ba05-d31b48cd5da6-util\") pod \"57ce87d6-d0bb-4504-ba05-d31b48cd5da6\" (UID: \"57ce87d6-d0bb-4504-ba05-d31b48cd5da6\") "
Mar 18 10:20:43.507085 master-0 kubenswrapper[30420]: I0318 10:20:43.506845 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57ce87d6-d0bb-4504-ba05-d31b48cd5da6-bundle\") pod \"57ce87d6-d0bb-4504-ba05-d31b48cd5da6\" (UID: \"57ce87d6-d0bb-4504-ba05-d31b48cd5da6\") "
Mar 18 10:20:43.510514 master-0 kubenswrapper[30420]: I0318 10:20:43.510460 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57ce87d6-d0bb-4504-ba05-d31b48cd5da6-bundle" (OuterVolumeSpecName: "bundle") pod "57ce87d6-d0bb-4504-ba05-d31b48cd5da6" (UID: "57ce87d6-d0bb-4504-ba05-d31b48cd5da6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 10:20:43.510846 master-0 kubenswrapper[30420]: I0318 10:20:43.510773 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57ce87d6-d0bb-4504-ba05-d31b48cd5da6-kube-api-access-ttptq" (OuterVolumeSpecName: "kube-api-access-ttptq") pod "57ce87d6-d0bb-4504-ba05-d31b48cd5da6" (UID: "57ce87d6-d0bb-4504-ba05-d31b48cd5da6"). InnerVolumeSpecName "kube-api-access-ttptq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 10:20:43.517697 master-0 kubenswrapper[30420]: I0318 10:20:43.517634 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57ce87d6-d0bb-4504-ba05-d31b48cd5da6-util" (OuterVolumeSpecName: "util") pod "57ce87d6-d0bb-4504-ba05-d31b48cd5da6" (UID: "57ce87d6-d0bb-4504-ba05-d31b48cd5da6"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 10:20:43.608550 master-0 kubenswrapper[30420]: I0318 10:20:43.608459 30420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttptq\" (UniqueName: \"kubernetes.io/projected/57ce87d6-d0bb-4504-ba05-d31b48cd5da6-kube-api-access-ttptq\") on node \"master-0\" DevicePath \"\""
Mar 18 10:20:43.608550 master-0 kubenswrapper[30420]: I0318 10:20:43.608500 30420 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57ce87d6-d0bb-4504-ba05-d31b48cd5da6-util\") on node \"master-0\" DevicePath \"\""
Mar 18 10:20:43.608550 master-0 kubenswrapper[30420]: I0318 10:20:43.608514 30420 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57ce87d6-d0bb-4504-ba05-d31b48cd5da6-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 10:20:43.879596 master-0 kubenswrapper[30420]: I0318 10:20:43.879538 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726jx4nj" event={"ID":"57ce87d6-d0bb-4504-ba05-d31b48cd5da6","Type":"ContainerDied","Data":"40dc5e5d97aaca47b82813019ec4adb13dfe29cb4c041d16ddc6f217339e8d9f"}
Mar 18 10:20:43.879596 master-0 kubenswrapper[30420]: I0318 10:20:43.879584 30420 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40dc5e5d97aaca47b82813019ec4adb13dfe29cb4c041d16ddc6f217339e8d9f"
Mar 18 10:20:43.879964 master-0 kubenswrapper[30420]: I0318 10:20:43.879642 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726jx4nj"
Mar 18 10:20:44.892683 master-0 kubenswrapper[30420]: I0318 10:20:44.892517 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-6h4pb" event={"ID":"978c8d14-d6a3-4f44-bf82-8640fa8b4db4","Type":"ContainerStarted","Data":"4fb438904e28cb84c616d450fbf32a3353fcd859563c8d7a6199f9b1706d39fc"}
Mar 18 10:20:44.964222 master-0 kubenswrapper[30420]: I0318 10:20:44.964113 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-6h4pb" podStartSLOduration=1.739409174 podStartE2EDuration="4.964086098s" podCreationTimestamp="2026-03-18 10:20:40 +0000 UTC" firstStartedPulling="2026-03-18 10:20:41.392472266 +0000 UTC m=+605.445218195" lastFinishedPulling="2026-03-18 10:20:44.61714915 +0000 UTC m=+608.669895119" observedRunningTime="2026-03-18 10:20:44.955947344 +0000 UTC m=+609.008693283" watchObservedRunningTime="2026-03-18 10:20:44.964086098 +0000 UTC m=+609.016832057"
Mar 18 10:20:51.667681 master-0 kubenswrapper[30420]: I0318 10:20:51.667601 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-4zmkn"]
Mar 18 10:20:51.668604 master-0 kubenswrapper[30420]: E0318 10:20:51.668060 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57ce87d6-d0bb-4504-ba05-d31b48cd5da6" containerName="util"
Mar 18 10:20:51.668604 master-0 kubenswrapper[30420]: I0318 10:20:51.668085 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="57ce87d6-d0bb-4504-ba05-d31b48cd5da6" containerName="util"
Mar 18 10:20:51.668604 master-0 kubenswrapper[30420]: E0318 10:20:51.668128 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57ce87d6-d0bb-4504-ba05-d31b48cd5da6" containerName="pull"
Mar 18 10:20:51.668604 master-0 kubenswrapper[30420]: I0318 10:20:51.668140 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="57ce87d6-d0bb-4504-ba05-d31b48cd5da6" containerName="pull"
Mar 18 10:20:51.668604 master-0 kubenswrapper[30420]: E0318 10:20:51.668180 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57ce87d6-d0bb-4504-ba05-d31b48cd5da6" containerName="extract"
Mar 18 10:20:51.668604 master-0 kubenswrapper[30420]: I0318 10:20:51.668192 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="57ce87d6-d0bb-4504-ba05-d31b48cd5da6" containerName="extract"
Mar 18 10:20:51.668604 master-0 kubenswrapper[30420]: I0318 10:20:51.668435 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="57ce87d6-d0bb-4504-ba05-d31b48cd5da6" containerName="extract"
Mar 18 10:20:51.669365 master-0 kubenswrapper[30420]: I0318 10:20:51.669325 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-4zmkn"
Mar 18 10:20:51.672336 master-0 kubenswrapper[30420]: I0318 10:20:51.672298 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Mar 18 10:20:51.674629 master-0 kubenswrapper[30420]: I0318 10:20:51.674586 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Mar 18 10:20:51.681573 master-0 kubenswrapper[30420]: I0318 10:20:51.681522 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-4zmkn"]
Mar 18 10:20:51.844552 master-0 kubenswrapper[30420]: I0318 10:20:51.844471 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4bbc2122-512f-4056-8572-80126bea4f0c-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-4zmkn\" (UID: \"4bbc2122-512f-4056-8572-80126bea4f0c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-4zmkn"
Mar 18 10:20:51.844879 master-0 kubenswrapper[30420]: I0318 10:20:51.844662 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75mcz\" (UniqueName: \"kubernetes.io/projected/4bbc2122-512f-4056-8572-80126bea4f0c-kube-api-access-75mcz\") pod \"cert-manager-cainjector-5545bd876-4zmkn\" (UID: \"4bbc2122-512f-4056-8572-80126bea4f0c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-4zmkn"
Mar 18 10:20:51.947269 master-0 kubenswrapper[30420]: I0318 10:20:51.947072 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4bbc2122-512f-4056-8572-80126bea4f0c-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-4zmkn\" (UID: \"4bbc2122-512f-4056-8572-80126bea4f0c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-4zmkn"
Mar 18 10:20:51.947808 master-0 kubenswrapper[30420]: I0318 10:20:51.947747 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75mcz\" (UniqueName: \"kubernetes.io/projected/4bbc2122-512f-4056-8572-80126bea4f0c-kube-api-access-75mcz\") pod \"cert-manager-cainjector-5545bd876-4zmkn\" (UID: \"4bbc2122-512f-4056-8572-80126bea4f0c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-4zmkn"
Mar 18 10:20:51.973396 master-0 kubenswrapper[30420]: I0318 10:20:51.971263 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75mcz\" (UniqueName: \"kubernetes.io/projected/4bbc2122-512f-4056-8572-80126bea4f0c-kube-api-access-75mcz\") pod \"cert-manager-cainjector-5545bd876-4zmkn\" (UID: \"4bbc2122-512f-4056-8572-80126bea4f0c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-4zmkn"
Mar 18 10:20:51.973396 master-0 kubenswrapper[30420]: I0318 10:20:51.971774 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4bbc2122-512f-4056-8572-80126bea4f0c-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-4zmkn\" (UID: \"4bbc2122-512f-4056-8572-80126bea4f0c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-4zmkn"
Mar 18 10:20:52.039958 master-0 kubenswrapper[30420]: I0318 10:20:52.039882 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-4zmkn"
Mar 18 10:20:52.704195 master-0 kubenswrapper[30420]: I0318 10:20:52.704135 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-4zmkn"]
Mar 18 10:20:52.961256 master-0 kubenswrapper[30420]: I0318 10:20:52.961081 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-4zmkn" event={"ID":"4bbc2122-512f-4056-8572-80126bea4f0c","Type":"ContainerStarted","Data":"9e58986a7f3f553e6eec501a9d11b5ba6d2dc4d66b7eb7249c5731d54f6cd4f9"}
Mar 18 10:20:52.980047 master-0 kubenswrapper[30420]: I0318 10:20:52.979966 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-7z65r"]
Mar 18 10:20:52.981737 master-0 kubenswrapper[30420]: I0318 10:20:52.981685 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-7z65r"
Mar 18 10:20:52.984026 master-0 kubenswrapper[30420]: I0318 10:20:52.983982 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Mar 18 10:20:52.984362 master-0 kubenswrapper[30420]: I0318 10:20:52.984303 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Mar 18 10:20:52.996548 master-0 kubenswrapper[30420]: I0318 10:20:52.996487 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-7z65r"]
Mar 18 10:20:53.062088 master-0 kubenswrapper[30420]: I0318 10:20:53.062006 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z94dw\" (UniqueName: \"kubernetes.io/projected/9ac1f807-09b8-4fd1-be56-682238c80007-kube-api-access-z94dw\") pod \"nmstate-operator-796d4cfff4-7z65r\" (UID: \"9ac1f807-09b8-4fd1-be56-682238c80007\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-7z65r"
Mar 18 10:20:53.164265 master-0 kubenswrapper[30420]: I0318 10:20:53.164204 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z94dw\" (UniqueName: \"kubernetes.io/projected/9ac1f807-09b8-4fd1-be56-682238c80007-kube-api-access-z94dw\") pod \"nmstate-operator-796d4cfff4-7z65r\" (UID: \"9ac1f807-09b8-4fd1-be56-682238c80007\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-7z65r"
Mar 18 10:20:53.184866 master-0 kubenswrapper[30420]: I0318 10:20:53.182652 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z94dw\" (UniqueName: \"kubernetes.io/projected/9ac1f807-09b8-4fd1-be56-682238c80007-kube-api-access-z94dw\") pod \"nmstate-operator-796d4cfff4-7z65r\" (UID: \"9ac1f807-09b8-4fd1-be56-682238c80007\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-7z65r"
Mar 18 10:20:53.303533 master-0 kubenswrapper[30420]: I0318 10:20:53.303481 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-7z65r"
Mar 18 10:20:53.717176 master-0 kubenswrapper[30420]: I0318 10:20:53.717125 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-7z65r"]
Mar 18 10:20:53.720748 master-0 kubenswrapper[30420]: W0318 10:20:53.720708 30420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ac1f807_09b8_4fd1_be56_682238c80007.slice/crio-5f8a2da42873df0626c53c3db5237965d75bf8074935dc4c6a53021fab2fc0f6 WatchSource:0}: Error finding container 5f8a2da42873df0626c53c3db5237965d75bf8074935dc4c6a53021fab2fc0f6: Status 404 returned error can't find the container with id 5f8a2da42873df0626c53c3db5237965d75bf8074935dc4c6a53021fab2fc0f6
Mar 18 10:20:53.974101 master-0 kubenswrapper[30420]: I0318 10:20:53.973957 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-7z65r" event={"ID":"9ac1f807-09b8-4fd1-be56-682238c80007","Type":"ContainerStarted","Data":"5f8a2da42873df0626c53c3db5237965d75bf8074935dc4c6a53021fab2fc0f6"}
Mar 18 10:20:54.413643 master-0 kubenswrapper[30420]: I0318 10:20:54.413590 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-29ng5"]
Mar 18 10:20:54.414579 master-0 kubenswrapper[30420]: I0318 10:20:54.414550 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-29ng5"
Mar 18 10:20:54.423292 master-0 kubenswrapper[30420]: I0318 10:20:54.422927 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-29ng5"]
Mar 18 10:20:54.491168 master-0 kubenswrapper[30420]: I0318 10:20:54.491072 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxzjj\" (UniqueName: \"kubernetes.io/projected/2792fe14-2599-454d-9b93-0587ac7086bd-kube-api-access-fxzjj\") pod \"cert-manager-webhook-6888856db4-29ng5\" (UID: \"2792fe14-2599-454d-9b93-0587ac7086bd\") " pod="cert-manager/cert-manager-webhook-6888856db4-29ng5"
Mar 18 10:20:54.491403 master-0 kubenswrapper[30420]: I0318 10:20:54.491264 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2792fe14-2599-454d-9b93-0587ac7086bd-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-29ng5\" (UID: \"2792fe14-2599-454d-9b93-0587ac7086bd\") " pod="cert-manager/cert-manager-webhook-6888856db4-29ng5"
Mar 18 10:20:54.592936 master-0 kubenswrapper[30420]: I0318 10:20:54.592598 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxzjj\" (UniqueName: \"kubernetes.io/projected/2792fe14-2599-454d-9b93-0587ac7086bd-kube-api-access-fxzjj\") pod \"cert-manager-webhook-6888856db4-29ng5\" (UID: \"2792fe14-2599-454d-9b93-0587ac7086bd\") " pod="cert-manager/cert-manager-webhook-6888856db4-29ng5"
Mar 18 10:20:54.593907 master-0 kubenswrapper[30420]: I0318 10:20:54.593845 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2792fe14-2599-454d-9b93-0587ac7086bd-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-29ng5\" (UID: \"2792fe14-2599-454d-9b93-0587ac7086bd\") " pod="cert-manager/cert-manager-webhook-6888856db4-29ng5"
Mar 18 10:20:54.614423 master-0 kubenswrapper[30420]: I0318 10:20:54.611810 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2792fe14-2599-454d-9b93-0587ac7086bd-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-29ng5\" (UID: \"2792fe14-2599-454d-9b93-0587ac7086bd\") " pod="cert-manager/cert-manager-webhook-6888856db4-29ng5"
Mar 18 10:20:54.615926 master-0 kubenswrapper[30420]: I0318 10:20:54.615890 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxzjj\" (UniqueName: \"kubernetes.io/projected/2792fe14-2599-454d-9b93-0587ac7086bd-kube-api-access-fxzjj\") pod \"cert-manager-webhook-6888856db4-29ng5\" (UID: \"2792fe14-2599-454d-9b93-0587ac7086bd\") " pod="cert-manager/cert-manager-webhook-6888856db4-29ng5"
Mar 18 10:20:54.740578 master-0 kubenswrapper[30420]: I0318 10:20:54.740458 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-29ng5"
Mar 18 10:20:55.207956 master-0 kubenswrapper[30420]: I0318 10:20:55.207878 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-29ng5"]
Mar 18 10:20:55.233077 master-0 kubenswrapper[30420]: W0318 10:20:55.233024 30420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2792fe14_2599_454d_9b93_0587ac7086bd.slice/crio-9c334055ad6875d533eb9bc575849ca134731d6d9a6d0b976833f2844fab19d7 WatchSource:0}: Error finding container 9c334055ad6875d533eb9bc575849ca134731d6d9a6d0b976833f2844fab19d7: Status 404 returned error can't find the container with id 9c334055ad6875d533eb9bc575849ca134731d6d9a6d0b976833f2844fab19d7
Mar 18 10:20:56.001802 master-0 kubenswrapper[30420]: I0318 10:20:56.001706 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-29ng5" event={"ID":"2792fe14-2599-454d-9b93-0587ac7086bd","Type":"ContainerStarted","Data":"9c334055ad6875d533eb9bc575849ca134731d6d9a6d0b976833f2844fab19d7"}
Mar 18 10:20:57.830369 master-0 kubenswrapper[30420]: I0318 10:20:57.830278 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-zhsw7"]
Mar 18 10:20:57.831450 master-0 kubenswrapper[30420]: I0318 10:20:57.831410 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-zhsw7"
Mar 18 10:20:57.867690 master-0 kubenswrapper[30420]: I0318 10:20:57.865164 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-zhsw7"]
Mar 18 10:20:57.982307 master-0 kubenswrapper[30420]: I0318 10:20:57.982248 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp7lr\" (UniqueName: \"kubernetes.io/projected/7d7e7ff4-332d-434b-9c84-c14686401897-kube-api-access-mp7lr\") pod \"cert-manager-545d4d4674-zhsw7\" (UID: \"7d7e7ff4-332d-434b-9c84-c14686401897\") " pod="cert-manager/cert-manager-545d4d4674-zhsw7"
Mar 18 10:20:57.982586 master-0 kubenswrapper[30420]: I0318 10:20:57.982330 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d7e7ff4-332d-434b-9c84-c14686401897-bound-sa-token\") pod \"cert-manager-545d4d4674-zhsw7\" (UID: \"7d7e7ff4-332d-434b-9c84-c14686401897\") " pod="cert-manager/cert-manager-545d4d4674-zhsw7"
Mar 18 10:20:58.084197 master-0 kubenswrapper[30420]: I0318 10:20:58.083996 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp7lr\" (UniqueName: \"kubernetes.io/projected/7d7e7ff4-332d-434b-9c84-c14686401897-kube-api-access-mp7lr\") pod \"cert-manager-545d4d4674-zhsw7\" (UID: \"7d7e7ff4-332d-434b-9c84-c14686401897\") " pod="cert-manager/cert-manager-545d4d4674-zhsw7"
Mar 18 10:20:58.084553 master-0 kubenswrapper[30420]: I0318 10:20:58.084496 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d7e7ff4-332d-434b-9c84-c14686401897-bound-sa-token\") pod \"cert-manager-545d4d4674-zhsw7\" (UID: \"7d7e7ff4-332d-434b-9c84-c14686401897\") " pod="cert-manager/cert-manager-545d4d4674-zhsw7"
Mar 18 10:20:58.119735 master-0 kubenswrapper[30420]: I0318 10:20:58.119678 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d7e7ff4-332d-434b-9c84-c14686401897-bound-sa-token\") pod \"cert-manager-545d4d4674-zhsw7\" (UID: \"7d7e7ff4-332d-434b-9c84-c14686401897\") " pod="cert-manager/cert-manager-545d4d4674-zhsw7"
Mar 18 10:20:58.122579 master-0 kubenswrapper[30420]: I0318 10:20:58.122545 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp7lr\" (UniqueName: \"kubernetes.io/projected/7d7e7ff4-332d-434b-9c84-c14686401897-kube-api-access-mp7lr\") pod \"cert-manager-545d4d4674-zhsw7\" (UID: \"7d7e7ff4-332d-434b-9c84-c14686401897\") " pod="cert-manager/cert-manager-545d4d4674-zhsw7"
Mar 18 10:20:58.187784 master-0 kubenswrapper[30420]: I0318 10:20:58.187726 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-zhsw7"
Mar 18 10:20:59.814844 master-0 kubenswrapper[30420]: I0318 10:20:59.813224 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-564bb7959-qgbm2"]
Mar 18 10:20:59.819870 master-0 kubenswrapper[30420]: I0318 10:20:59.817344 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-564bb7959-qgbm2" Mar 18 10:20:59.828852 master-0 kubenswrapper[30420]: I0318 10:20:59.825285 30420 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Mar 18 10:20:59.828852 master-0 kubenswrapper[30420]: I0318 10:20:59.825717 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Mar 18 10:20:59.828852 master-0 kubenswrapper[30420]: I0318 10:20:59.825990 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Mar 18 10:20:59.828852 master-0 kubenswrapper[30420]: I0318 10:20:59.826134 30420 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Mar 18 10:20:59.856842 master-0 kubenswrapper[30420]: I0318 10:20:59.850664 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-564bb7959-qgbm2"] Mar 18 10:20:59.955844 master-0 kubenswrapper[30420]: I0318 10:20:59.954786 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/85edf76a-b718-42ae-b899-54a0f53cf836-apiservice-cert\") pod \"metallb-operator-controller-manager-564bb7959-qgbm2\" (UID: \"85edf76a-b718-42ae-b899-54a0f53cf836\") " pod="metallb-system/metallb-operator-controller-manager-564bb7959-qgbm2" Mar 18 10:20:59.955844 master-0 kubenswrapper[30420]: I0318 10:20:59.954860 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fp5n\" (UniqueName: \"kubernetes.io/projected/85edf76a-b718-42ae-b899-54a0f53cf836-kube-api-access-9fp5n\") pod \"metallb-operator-controller-manager-564bb7959-qgbm2\" (UID: \"85edf76a-b718-42ae-b899-54a0f53cf836\") " 
pod="metallb-system/metallb-operator-controller-manager-564bb7959-qgbm2" Mar 18 10:20:59.955844 master-0 kubenswrapper[30420]: I0318 10:20:59.954925 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/85edf76a-b718-42ae-b899-54a0f53cf836-webhook-cert\") pod \"metallb-operator-controller-manager-564bb7959-qgbm2\" (UID: \"85edf76a-b718-42ae-b899-54a0f53cf836\") " pod="metallb-system/metallb-operator-controller-manager-564bb7959-qgbm2" Mar 18 10:21:00.062846 master-0 kubenswrapper[30420]: I0318 10:21:00.056863 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/85edf76a-b718-42ae-b899-54a0f53cf836-webhook-cert\") pod \"metallb-operator-controller-manager-564bb7959-qgbm2\" (UID: \"85edf76a-b718-42ae-b899-54a0f53cf836\") " pod="metallb-system/metallb-operator-controller-manager-564bb7959-qgbm2" Mar 18 10:21:00.062846 master-0 kubenswrapper[30420]: I0318 10:21:00.056974 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/85edf76a-b718-42ae-b899-54a0f53cf836-apiservice-cert\") pod \"metallb-operator-controller-manager-564bb7959-qgbm2\" (UID: \"85edf76a-b718-42ae-b899-54a0f53cf836\") " pod="metallb-system/metallb-operator-controller-manager-564bb7959-qgbm2" Mar 18 10:21:00.062846 master-0 kubenswrapper[30420]: I0318 10:21:00.057005 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fp5n\" (UniqueName: \"kubernetes.io/projected/85edf76a-b718-42ae-b899-54a0f53cf836-kube-api-access-9fp5n\") pod \"metallb-operator-controller-manager-564bb7959-qgbm2\" (UID: \"85edf76a-b718-42ae-b899-54a0f53cf836\") " pod="metallb-system/metallb-operator-controller-manager-564bb7959-qgbm2" Mar 18 10:21:00.071878 master-0 kubenswrapper[30420]: I0318 10:21:00.069709 30420 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/85edf76a-b718-42ae-b899-54a0f53cf836-apiservice-cert\") pod \"metallb-operator-controller-manager-564bb7959-qgbm2\" (UID: \"85edf76a-b718-42ae-b899-54a0f53cf836\") " pod="metallb-system/metallb-operator-controller-manager-564bb7959-qgbm2" Mar 18 10:21:00.078897 master-0 kubenswrapper[30420]: I0318 10:21:00.074542 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/85edf76a-b718-42ae-b899-54a0f53cf836-webhook-cert\") pod \"metallb-operator-controller-manager-564bb7959-qgbm2\" (UID: \"85edf76a-b718-42ae-b899-54a0f53cf836\") " pod="metallb-system/metallb-operator-controller-manager-564bb7959-qgbm2" Mar 18 10:21:00.166418 master-0 kubenswrapper[30420]: I0318 10:21:00.166352 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fp5n\" (UniqueName: \"kubernetes.io/projected/85edf76a-b718-42ae-b899-54a0f53cf836-kube-api-access-9fp5n\") pod \"metallb-operator-controller-manager-564bb7959-qgbm2\" (UID: \"85edf76a-b718-42ae-b899-54a0f53cf836\") " pod="metallb-system/metallb-operator-controller-manager-564bb7959-qgbm2" Mar 18 10:21:00.198580 master-0 kubenswrapper[30420]: I0318 10:21:00.198483 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-564bb7959-qgbm2" Mar 18 10:21:01.065756 master-0 kubenswrapper[30420]: I0318 10:21:01.065699 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7bff698c48-vrvtb"] Mar 18 10:21:01.067796 master-0 kubenswrapper[30420]: I0318 10:21:01.067771 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7bff698c48-vrvtb" Mar 18 10:21:01.069970 master-0 kubenswrapper[30420]: I0318 10:21:01.069887 30420 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Mar 18 10:21:01.070268 master-0 kubenswrapper[30420]: I0318 10:21:01.070230 30420 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 18 10:21:01.098949 master-0 kubenswrapper[30420]: I0318 10:21:01.098877 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb6bm\" (UniqueName: \"kubernetes.io/projected/7972836b-9e15-4fdd-8408-e1ca80deaeef-kube-api-access-vb6bm\") pod \"metallb-operator-webhook-server-7bff698c48-vrvtb\" (UID: \"7972836b-9e15-4fdd-8408-e1ca80deaeef\") " pod="metallb-system/metallb-operator-webhook-server-7bff698c48-vrvtb" Mar 18 10:21:01.099213 master-0 kubenswrapper[30420]: I0318 10:21:01.099019 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7972836b-9e15-4fdd-8408-e1ca80deaeef-webhook-cert\") pod \"metallb-operator-webhook-server-7bff698c48-vrvtb\" (UID: \"7972836b-9e15-4fdd-8408-e1ca80deaeef\") " pod="metallb-system/metallb-operator-webhook-server-7bff698c48-vrvtb" Mar 18 10:21:01.099213 master-0 kubenswrapper[30420]: I0318 10:21:01.099057 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7972836b-9e15-4fdd-8408-e1ca80deaeef-apiservice-cert\") pod \"metallb-operator-webhook-server-7bff698c48-vrvtb\" (UID: \"7972836b-9e15-4fdd-8408-e1ca80deaeef\") " pod="metallb-system/metallb-operator-webhook-server-7bff698c48-vrvtb" Mar 18 10:21:01.200933 master-0 kubenswrapper[30420]: I0318 10:21:01.200862 30420 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb6bm\" (UniqueName: \"kubernetes.io/projected/7972836b-9e15-4fdd-8408-e1ca80deaeef-kube-api-access-vb6bm\") pod \"metallb-operator-webhook-server-7bff698c48-vrvtb\" (UID: \"7972836b-9e15-4fdd-8408-e1ca80deaeef\") " pod="metallb-system/metallb-operator-webhook-server-7bff698c48-vrvtb" Mar 18 10:21:01.201210 master-0 kubenswrapper[30420]: I0318 10:21:01.201021 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7972836b-9e15-4fdd-8408-e1ca80deaeef-webhook-cert\") pod \"metallb-operator-webhook-server-7bff698c48-vrvtb\" (UID: \"7972836b-9e15-4fdd-8408-e1ca80deaeef\") " pod="metallb-system/metallb-operator-webhook-server-7bff698c48-vrvtb" Mar 18 10:21:01.201346 master-0 kubenswrapper[30420]: I0318 10:21:01.201272 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7972836b-9e15-4fdd-8408-e1ca80deaeef-apiservice-cert\") pod \"metallb-operator-webhook-server-7bff698c48-vrvtb\" (UID: \"7972836b-9e15-4fdd-8408-e1ca80deaeef\") " pod="metallb-system/metallb-operator-webhook-server-7bff698c48-vrvtb" Mar 18 10:21:01.206055 master-0 kubenswrapper[30420]: I0318 10:21:01.206013 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7972836b-9e15-4fdd-8408-e1ca80deaeef-webhook-cert\") pod \"metallb-operator-webhook-server-7bff698c48-vrvtb\" (UID: \"7972836b-9e15-4fdd-8408-e1ca80deaeef\") " pod="metallb-system/metallb-operator-webhook-server-7bff698c48-vrvtb" Mar 18 10:21:01.206236 master-0 kubenswrapper[30420]: I0318 10:21:01.206200 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7972836b-9e15-4fdd-8408-e1ca80deaeef-apiservice-cert\") pod 
\"metallb-operator-webhook-server-7bff698c48-vrvtb\" (UID: \"7972836b-9e15-4fdd-8408-e1ca80deaeef\") " pod="metallb-system/metallb-operator-webhook-server-7bff698c48-vrvtb" Mar 18 10:21:01.218313 master-0 kubenswrapper[30420]: I0318 10:21:01.218232 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7bff698c48-vrvtb"] Mar 18 10:21:01.355529 master-0 kubenswrapper[30420]: I0318 10:21:01.354891 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb6bm\" (UniqueName: \"kubernetes.io/projected/7972836b-9e15-4fdd-8408-e1ca80deaeef-kube-api-access-vb6bm\") pod \"metallb-operator-webhook-server-7bff698c48-vrvtb\" (UID: \"7972836b-9e15-4fdd-8408-e1ca80deaeef\") " pod="metallb-system/metallb-operator-webhook-server-7bff698c48-vrvtb" Mar 18 10:21:01.388858 master-0 kubenswrapper[30420]: I0318 10:21:01.385261 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7bff698c48-vrvtb" Mar 18 10:21:02.064070 master-0 kubenswrapper[30420]: I0318 10:21:02.063232 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-zhsw7"] Mar 18 10:21:02.124645 master-0 kubenswrapper[30420]: I0318 10:21:02.124575 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-29ng5" event={"ID":"2792fe14-2599-454d-9b93-0587ac7086bd","Type":"ContainerStarted","Data":"95957f2681c24ed8dffa40404e23e18779a02dd95b3c1fedfc52956f2bc8c03e"} Mar 18 10:21:02.125544 master-0 kubenswrapper[30420]: I0318 10:21:02.125515 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-29ng5" Mar 18 10:21:02.126207 master-0 kubenswrapper[30420]: I0318 10:21:02.126153 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-564bb7959-qgbm2"] Mar 18 
10:21:02.130875 master-0 kubenswrapper[30420]: I0318 10:21:02.128509 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-7z65r" event={"ID":"9ac1f807-09b8-4fd1-be56-682238c80007","Type":"ContainerStarted","Data":"e382e0c2a60c0d635a24ca8a496fb403c9fda838f273e34f5ee9880f71bf2d8d"} Mar 18 10:21:02.142688 master-0 kubenswrapper[30420]: I0318 10:21:02.142593 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7bff698c48-vrvtb"] Mar 18 10:21:02.145321 master-0 kubenswrapper[30420]: I0318 10:21:02.144412 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-zhsw7" event={"ID":"7d7e7ff4-332d-434b-9c84-c14686401897","Type":"ContainerStarted","Data":"d6fc694b1ee99b976f2717477066649dad749e5d5811248b44edccfd9e5da5e3"} Mar 18 10:21:02.180615 master-0 kubenswrapper[30420]: I0318 10:21:02.180542 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-29ng5" podStartSLOduration=1.874884603 podStartE2EDuration="8.180521745s" podCreationTimestamp="2026-03-18 10:20:54 +0000 UTC" firstStartedPulling="2026-03-18 10:20:55.241966768 +0000 UTC m=+619.294712697" lastFinishedPulling="2026-03-18 10:21:01.54760392 +0000 UTC m=+625.600349839" observedRunningTime="2026-03-18 10:21:02.162488742 +0000 UTC m=+626.215234671" watchObservedRunningTime="2026-03-18 10:21:02.180521745 +0000 UTC m=+626.233267694" Mar 18 10:21:02.201911 master-0 kubenswrapper[30420]: I0318 10:21:02.201227 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-4zmkn" event={"ID":"4bbc2122-512f-4056-8572-80126bea4f0c","Type":"ContainerStarted","Data":"39ccae21119de9662b06cf54ea500fafc92eb7722587c375bfe1b9efb6d69492"} Mar 18 10:21:02.257894 master-0 kubenswrapper[30420]: I0318 10:21:02.257815 30420 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-nmstate/nmstate-operator-796d4cfff4-7z65r" podStartSLOduration=2.483449471 podStartE2EDuration="10.257800785s" podCreationTimestamp="2026-03-18 10:20:52 +0000 UTC" firstStartedPulling="2026-03-18 10:20:53.723097787 +0000 UTC m=+617.775843716" lastFinishedPulling="2026-03-18 10:21:01.497449101 +0000 UTC m=+625.550195030" observedRunningTime="2026-03-18 10:21:02.197971833 +0000 UTC m=+626.250717762" watchObservedRunningTime="2026-03-18 10:21:02.257800785 +0000 UTC m=+626.310546714" Mar 18 10:21:02.265801 master-0 kubenswrapper[30420]: I0318 10:21:02.265736 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-4zmkn" podStartSLOduration=2.444811692 podStartE2EDuration="11.265716973s" podCreationTimestamp="2026-03-18 10:20:51 +0000 UTC" firstStartedPulling="2026-03-18 10:20:52.726726609 +0000 UTC m=+616.779472538" lastFinishedPulling="2026-03-18 10:21:01.54763189 +0000 UTC m=+625.600377819" observedRunningTime="2026-03-18 10:21:02.25440737 +0000 UTC m=+626.307153289" watchObservedRunningTime="2026-03-18 10:21:02.265716973 +0000 UTC m=+626.318462902" Mar 18 10:21:03.184020 master-0 kubenswrapper[30420]: I0318 10:21:03.183965 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7bff698c48-vrvtb" event={"ID":"7972836b-9e15-4fdd-8408-e1ca80deaeef","Type":"ContainerStarted","Data":"e5e062f73f13300d2a54526650c02172e8f41ddff23587b2721a87120fb0f282"} Mar 18 10:21:03.187209 master-0 kubenswrapper[30420]: I0318 10:21:03.187157 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-zhsw7" event={"ID":"7d7e7ff4-332d-434b-9c84-c14686401897","Type":"ContainerStarted","Data":"e9f6451133677dcf3218faf80c38ed5596fab3d6a245ca075301b097a09a3cb9"} Mar 18 10:21:03.188622 master-0 kubenswrapper[30420]: I0318 10:21:03.188584 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/metallb-operator-controller-manager-564bb7959-qgbm2" event={"ID":"85edf76a-b718-42ae-b899-54a0f53cf836","Type":"ContainerStarted","Data":"67ba739b6fbcb0e9b1370c08ca8ee43901d1c1ee7621d768478bfd01a6f9c7e1"} Mar 18 10:21:03.252548 master-0 kubenswrapper[30420]: I0318 10:21:03.252476 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-zhsw7" podStartSLOduration=6.252452029 podStartE2EDuration="6.252452029s" podCreationTimestamp="2026-03-18 10:20:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:21:03.246778237 +0000 UTC m=+627.299524166" watchObservedRunningTime="2026-03-18 10:21:03.252452029 +0000 UTC m=+627.305197958" Mar 18 10:21:08.747036 master-0 kubenswrapper[30420]: I0318 10:21:08.746974 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-vh7xm"] Mar 18 10:21:08.748952 master-0 kubenswrapper[30420]: I0318 10:21:08.748925 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-vh7xm" Mar 18 10:21:08.758850 master-0 kubenswrapper[30420]: I0318 10:21:08.758374 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Mar 18 10:21:08.758850 master-0 kubenswrapper[30420]: I0318 10:21:08.758687 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Mar 18 10:21:08.763795 master-0 kubenswrapper[30420]: I0318 10:21:08.759716 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-vh7xm"] Mar 18 10:21:08.803990 master-0 kubenswrapper[30420]: I0318 10:21:08.794801 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs97r\" (UniqueName: \"kubernetes.io/projected/afdba306-7371-4b95-aaf9-9398417e1b12-kube-api-access-vs97r\") pod \"obo-prometheus-operator-8ff7d675-vh7xm\" (UID: \"afdba306-7371-4b95-aaf9-9398417e1b12\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-vh7xm" Mar 18 10:21:08.897020 master-0 kubenswrapper[30420]: I0318 10:21:08.896634 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs97r\" (UniqueName: \"kubernetes.io/projected/afdba306-7371-4b95-aaf9-9398417e1b12-kube-api-access-vs97r\") pod \"obo-prometheus-operator-8ff7d675-vh7xm\" (UID: \"afdba306-7371-4b95-aaf9-9398417e1b12\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-vh7xm" Mar 18 10:21:08.931903 master-0 kubenswrapper[30420]: I0318 10:21:08.931790 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs97r\" (UniqueName: \"kubernetes.io/projected/afdba306-7371-4b95-aaf9-9398417e1b12-kube-api-access-vs97r\") pod \"obo-prometheus-operator-8ff7d675-vh7xm\" (UID: \"afdba306-7371-4b95-aaf9-9398417e1b12\") " 
pod="openshift-operators/obo-prometheus-operator-8ff7d675-vh7xm" Mar 18 10:21:09.079671 master-0 kubenswrapper[30420]: I0318 10:21:09.079610 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-vh7xm" Mar 18 10:21:09.150654 master-0 kubenswrapper[30420]: I0318 10:21:09.150593 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-777fbd5757-czjbr"] Mar 18 10:21:09.151953 master-0 kubenswrapper[30420]: I0318 10:21:09.151919 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-777fbd5757-czjbr" Mar 18 10:21:09.154445 master-0 kubenswrapper[30420]: I0318 10:21:09.154415 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Mar 18 10:21:09.171444 master-0 kubenswrapper[30420]: I0318 10:21:09.171343 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-777fbd5757-rsg7p"] Mar 18 10:21:09.172988 master-0 kubenswrapper[30420]: I0318 10:21:09.172304 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-777fbd5757-rsg7p" Mar 18 10:21:09.179188 master-0 kubenswrapper[30420]: I0318 10:21:09.178451 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-777fbd5757-czjbr"] Mar 18 10:21:09.201966 master-0 kubenswrapper[30420]: I0318 10:21:09.201903 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a1dd8046-1f18-404b-87df-00c917d1fdc2-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-777fbd5757-rsg7p\" (UID: \"a1dd8046-1f18-404b-87df-00c917d1fdc2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-777fbd5757-rsg7p" Mar 18 10:21:09.202189 master-0 kubenswrapper[30420]: I0318 10:21:09.201987 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/62cbc290-158f-4399-aeb5-a97661aca61d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-777fbd5757-czjbr\" (UID: \"62cbc290-158f-4399-aeb5-a97661aca61d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-777fbd5757-czjbr" Mar 18 10:21:09.202189 master-0 kubenswrapper[30420]: I0318 10:21:09.202036 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/62cbc290-158f-4399-aeb5-a97661aca61d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-777fbd5757-czjbr\" (UID: \"62cbc290-158f-4399-aeb5-a97661aca61d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-777fbd5757-czjbr" Mar 18 10:21:09.202189 master-0 kubenswrapper[30420]: I0318 10:21:09.202097 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" 
(UniqueName: \"kubernetes.io/secret/a1dd8046-1f18-404b-87df-00c917d1fdc2-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-777fbd5757-rsg7p\" (UID: \"a1dd8046-1f18-404b-87df-00c917d1fdc2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-777fbd5757-rsg7p" Mar 18 10:21:09.230229 master-0 kubenswrapper[30420]: I0318 10:21:09.229780 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-777fbd5757-rsg7p"] Mar 18 10:21:09.305638 master-0 kubenswrapper[30420]: I0318 10:21:09.304987 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/62cbc290-158f-4399-aeb5-a97661aca61d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-777fbd5757-czjbr\" (UID: \"62cbc290-158f-4399-aeb5-a97661aca61d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-777fbd5757-czjbr" Mar 18 10:21:09.306163 master-0 kubenswrapper[30420]: I0318 10:21:09.306141 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a1dd8046-1f18-404b-87df-00c917d1fdc2-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-777fbd5757-rsg7p\" (UID: \"a1dd8046-1f18-404b-87df-00c917d1fdc2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-777fbd5757-rsg7p" Mar 18 10:21:09.306384 master-0 kubenswrapper[30420]: I0318 10:21:09.306341 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a1dd8046-1f18-404b-87df-00c917d1fdc2-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-777fbd5757-rsg7p\" (UID: \"a1dd8046-1f18-404b-87df-00c917d1fdc2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-777fbd5757-rsg7p" Mar 18 10:21:09.306562 master-0 kubenswrapper[30420]: I0318 10:21:09.306525 
30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/62cbc290-158f-4399-aeb5-a97661aca61d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-777fbd5757-czjbr\" (UID: \"62cbc290-158f-4399-aeb5-a97661aca61d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-777fbd5757-czjbr" Mar 18 10:21:09.308752 master-0 kubenswrapper[30420]: I0318 10:21:09.308711 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/62cbc290-158f-4399-aeb5-a97661aca61d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-777fbd5757-czjbr\" (UID: \"62cbc290-158f-4399-aeb5-a97661aca61d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-777fbd5757-czjbr" Mar 18 10:21:09.310047 master-0 kubenswrapper[30420]: I0318 10:21:09.310008 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a1dd8046-1f18-404b-87df-00c917d1fdc2-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-777fbd5757-rsg7p\" (UID: \"a1dd8046-1f18-404b-87df-00c917d1fdc2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-777fbd5757-rsg7p" Mar 18 10:21:09.315918 master-0 kubenswrapper[30420]: I0318 10:21:09.313250 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/62cbc290-158f-4399-aeb5-a97661aca61d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-777fbd5757-czjbr\" (UID: \"62cbc290-158f-4399-aeb5-a97661aca61d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-777fbd5757-czjbr" Mar 18 10:21:09.317561 master-0 kubenswrapper[30420]: I0318 10:21:09.317516 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/a1dd8046-1f18-404b-87df-00c917d1fdc2-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-777fbd5757-rsg7p\" (UID: \"a1dd8046-1f18-404b-87df-00c917d1fdc2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-777fbd5757-rsg7p" Mar 18 10:21:09.480450 master-0 kubenswrapper[30420]: I0318 10:21:09.480281 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-777fbd5757-czjbr" Mar 18 10:21:09.510076 master-0 kubenswrapper[30420]: I0318 10:21:09.509988 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-777fbd5757-rsg7p" Mar 18 10:21:09.728379 master-0 kubenswrapper[30420]: I0318 10:21:09.728319 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-2csj8"] Mar 18 10:21:09.730833 master-0 kubenswrapper[30420]: I0318 10:21:09.730755 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-2csj8" Mar 18 10:21:09.733864 master-0 kubenswrapper[30420]: I0318 10:21:09.733388 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Mar 18 10:21:09.748571 master-0 kubenswrapper[30420]: I0318 10:21:09.748380 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-29ng5" Mar 18 10:21:09.761791 master-0 kubenswrapper[30420]: I0318 10:21:09.761727 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-2csj8"] Mar 18 10:21:09.814694 master-0 kubenswrapper[30420]: I0318 10:21:09.814632 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xctkc\" (UniqueName: \"kubernetes.io/projected/32cdee06-2791-4cac-9447-26fee189be3f-kube-api-access-xctkc\") pod \"observability-operator-6dd7dd855f-2csj8\" (UID: \"32cdee06-2791-4cac-9447-26fee189be3f\") " pod="openshift-operators/observability-operator-6dd7dd855f-2csj8" Mar 18 10:21:09.814980 master-0 kubenswrapper[30420]: I0318 10:21:09.814772 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/32cdee06-2791-4cac-9447-26fee189be3f-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-2csj8\" (UID: \"32cdee06-2791-4cac-9447-26fee189be3f\") " pod="openshift-operators/observability-operator-6dd7dd855f-2csj8" Mar 18 10:21:09.916387 master-0 kubenswrapper[30420]: I0318 10:21:09.916312 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xctkc\" (UniqueName: \"kubernetes.io/projected/32cdee06-2791-4cac-9447-26fee189be3f-kube-api-access-xctkc\") pod \"observability-operator-6dd7dd855f-2csj8\" (UID: 
\"32cdee06-2791-4cac-9447-26fee189be3f\") " pod="openshift-operators/observability-operator-6dd7dd855f-2csj8" Mar 18 10:21:09.916631 master-0 kubenswrapper[30420]: I0318 10:21:09.916504 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/32cdee06-2791-4cac-9447-26fee189be3f-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-2csj8\" (UID: \"32cdee06-2791-4cac-9447-26fee189be3f\") " pod="openshift-operators/observability-operator-6dd7dd855f-2csj8" Mar 18 10:21:09.921149 master-0 kubenswrapper[30420]: I0318 10:21:09.921116 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/32cdee06-2791-4cac-9447-26fee189be3f-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-2csj8\" (UID: \"32cdee06-2791-4cac-9447-26fee189be3f\") " pod="openshift-operators/observability-operator-6dd7dd855f-2csj8" Mar 18 10:21:10.174931 master-0 kubenswrapper[30420]: I0318 10:21:10.174867 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xctkc\" (UniqueName: \"kubernetes.io/projected/32cdee06-2791-4cac-9447-26fee189be3f-kube-api-access-xctkc\") pod \"observability-operator-6dd7dd855f-2csj8\" (UID: \"32cdee06-2791-4cac-9447-26fee189be3f\") " pod="openshift-operators/observability-operator-6dd7dd855f-2csj8" Mar 18 10:21:10.362847 master-0 kubenswrapper[30420]: I0318 10:21:10.350652 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-2csj8" Mar 18 10:21:10.524918 master-0 kubenswrapper[30420]: I0318 10:21:10.524147 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-646786b97b-ngcvz"] Mar 18 10:21:10.525135 master-0 kubenswrapper[30420]: I0318 10:21:10.525055 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-646786b97b-ngcvz" Mar 18 10:21:10.531845 master-0 kubenswrapper[30420]: I0318 10:21:10.528006 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-service-cert" Mar 18 10:21:10.578377 master-0 kubenswrapper[30420]: I0318 10:21:10.572724 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-646786b97b-ngcvz"] Mar 18 10:21:10.632516 master-0 kubenswrapper[30420]: I0318 10:21:10.632147 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/32248d17-01fa-4580-90a9-1cff5b20cb66-apiservice-cert\") pod \"perses-operator-646786b97b-ngcvz\" (UID: \"32248d17-01fa-4580-90a9-1cff5b20cb66\") " pod="openshift-operators/perses-operator-646786b97b-ngcvz" Mar 18 10:21:10.632516 master-0 kubenswrapper[30420]: I0318 10:21:10.632294 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/32248d17-01fa-4580-90a9-1cff5b20cb66-openshift-service-ca\") pod \"perses-operator-646786b97b-ngcvz\" (UID: \"32248d17-01fa-4580-90a9-1cff5b20cb66\") " pod="openshift-operators/perses-operator-646786b97b-ngcvz" Mar 18 10:21:10.632516 master-0 kubenswrapper[30420]: I0318 10:21:10.632331 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bztsj\" (UniqueName: \"kubernetes.io/projected/32248d17-01fa-4580-90a9-1cff5b20cb66-kube-api-access-bztsj\") pod \"perses-operator-646786b97b-ngcvz\" (UID: \"32248d17-01fa-4580-90a9-1cff5b20cb66\") " pod="openshift-operators/perses-operator-646786b97b-ngcvz" Mar 18 10:21:10.632516 master-0 kubenswrapper[30420]: I0318 10:21:10.632367 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/32248d17-01fa-4580-90a9-1cff5b20cb66-webhook-cert\") pod \"perses-operator-646786b97b-ngcvz\" (UID: \"32248d17-01fa-4580-90a9-1cff5b20cb66\") " pod="openshift-operators/perses-operator-646786b97b-ngcvz" Mar 18 10:21:10.734369 master-0 kubenswrapper[30420]: I0318 10:21:10.734244 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/32248d17-01fa-4580-90a9-1cff5b20cb66-apiservice-cert\") pod \"perses-operator-646786b97b-ngcvz\" (UID: \"32248d17-01fa-4580-90a9-1cff5b20cb66\") " pod="openshift-operators/perses-operator-646786b97b-ngcvz" Mar 18 10:21:10.734369 master-0 kubenswrapper[30420]: I0318 10:21:10.734341 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/32248d17-01fa-4580-90a9-1cff5b20cb66-openshift-service-ca\") pod \"perses-operator-646786b97b-ngcvz\" (UID: \"32248d17-01fa-4580-90a9-1cff5b20cb66\") " pod="openshift-operators/perses-operator-646786b97b-ngcvz" Mar 18 10:21:10.734369 master-0 kubenswrapper[30420]: I0318 10:21:10.734370 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bztsj\" (UniqueName: \"kubernetes.io/projected/32248d17-01fa-4580-90a9-1cff5b20cb66-kube-api-access-bztsj\") pod \"perses-operator-646786b97b-ngcvz\" (UID: \"32248d17-01fa-4580-90a9-1cff5b20cb66\") " pod="openshift-operators/perses-operator-646786b97b-ngcvz" Mar 18 10:21:10.734656 master-0 kubenswrapper[30420]: I0318 10:21:10.734391 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/32248d17-01fa-4580-90a9-1cff5b20cb66-webhook-cert\") pod \"perses-operator-646786b97b-ngcvz\" (UID: \"32248d17-01fa-4580-90a9-1cff5b20cb66\") " pod="openshift-operators/perses-operator-646786b97b-ngcvz" Mar 18 10:21:10.735807 master-0 
kubenswrapper[30420]: I0318 10:21:10.735766 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/32248d17-01fa-4580-90a9-1cff5b20cb66-openshift-service-ca\") pod \"perses-operator-646786b97b-ngcvz\" (UID: \"32248d17-01fa-4580-90a9-1cff5b20cb66\") " pod="openshift-operators/perses-operator-646786b97b-ngcvz" Mar 18 10:21:10.739375 master-0 kubenswrapper[30420]: I0318 10:21:10.739324 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/32248d17-01fa-4580-90a9-1cff5b20cb66-webhook-cert\") pod \"perses-operator-646786b97b-ngcvz\" (UID: \"32248d17-01fa-4580-90a9-1cff5b20cb66\") " pod="openshift-operators/perses-operator-646786b97b-ngcvz" Mar 18 10:21:10.741253 master-0 kubenswrapper[30420]: I0318 10:21:10.741200 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/32248d17-01fa-4580-90a9-1cff5b20cb66-apiservice-cert\") pod \"perses-operator-646786b97b-ngcvz\" (UID: \"32248d17-01fa-4580-90a9-1cff5b20cb66\") " pod="openshift-operators/perses-operator-646786b97b-ngcvz" Mar 18 10:21:10.860148 master-0 kubenswrapper[30420]: I0318 10:21:10.860009 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bztsj\" (UniqueName: \"kubernetes.io/projected/32248d17-01fa-4580-90a9-1cff5b20cb66-kube-api-access-bztsj\") pod \"perses-operator-646786b97b-ngcvz\" (UID: \"32248d17-01fa-4580-90a9-1cff5b20cb66\") " pod="openshift-operators/perses-operator-646786b97b-ngcvz" Mar 18 10:21:10.941190 master-0 kubenswrapper[30420]: I0318 10:21:10.939329 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-646786b97b-ngcvz" Mar 18 10:21:10.979223 master-0 kubenswrapper[30420]: I0318 10:21:10.979175 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-vh7xm"] Mar 18 10:21:11.136322 master-0 kubenswrapper[30420]: I0318 10:21:11.135599 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-777fbd5757-czjbr"] Mar 18 10:21:11.148884 master-0 kubenswrapper[30420]: I0318 10:21:11.143504 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-777fbd5757-rsg7p"] Mar 18 10:21:11.257453 master-0 kubenswrapper[30420]: I0318 10:21:11.233317 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-2csj8"] Mar 18 10:21:11.363943 master-0 kubenswrapper[30420]: I0318 10:21:11.359617 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-8ff7d675-vh7xm" event={"ID":"afdba306-7371-4b95-aaf9-9398417e1b12","Type":"ContainerStarted","Data":"496896192b7b9d148ffa8736fa90aabb7076ead0b101944b590e52019e3f02ca"} Mar 18 10:21:11.365154 master-0 kubenswrapper[30420]: I0318 10:21:11.365103 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7bff698c48-vrvtb" event={"ID":"7972836b-9e15-4fdd-8408-e1ca80deaeef","Type":"ContainerStarted","Data":"665a2d0033988279c7feae3c4e0f43faf90ccc7f49c19548b4e18dc60588acb1"} Mar 18 10:21:11.365893 master-0 kubenswrapper[30420]: I0318 10:21:11.365859 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7bff698c48-vrvtb" Mar 18 10:21:11.367063 master-0 kubenswrapper[30420]: I0318 10:21:11.367029 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-777fbd5757-rsg7p" event={"ID":"a1dd8046-1f18-404b-87df-00c917d1fdc2","Type":"ContainerStarted","Data":"f91097206cfa0cbf9240f73ab7191946d6caaf3c331b31efe2b5afbd5300db35"} Mar 18 10:21:11.368641 master-0 kubenswrapper[30420]: I0318 10:21:11.368602 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-777fbd5757-czjbr" event={"ID":"62cbc290-158f-4399-aeb5-a97661aca61d","Type":"ContainerStarted","Data":"4c593ac0a47ea6b6566a2bc7f316be14d0f4f0f8b199d67e4953f6449c1cdfba"} Mar 18 10:21:11.369600 master-0 kubenswrapper[30420]: I0318 10:21:11.369561 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-6dd7dd855f-2csj8" event={"ID":"32cdee06-2791-4cac-9447-26fee189be3f","Type":"ContainerStarted","Data":"837de03940afc5d276edc61f0fbcf970b2965d457958fed8830c41561c7c2e13"} Mar 18 10:21:11.372951 master-0 kubenswrapper[30420]: I0318 10:21:11.372685 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-564bb7959-qgbm2" event={"ID":"85edf76a-b718-42ae-b899-54a0f53cf836","Type":"ContainerStarted","Data":"ce8eb267457c69e3c8b224839f21921280b981cbc2540d24002c4aea81c3c95b"} Mar 18 10:21:11.373058 master-0 kubenswrapper[30420]: I0318 10:21:11.372963 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-564bb7959-qgbm2" Mar 18 10:21:11.397500 master-0 kubenswrapper[30420]: I0318 10:21:11.394745 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7bff698c48-vrvtb" podStartSLOduration=3.22739175 podStartE2EDuration="11.394726389s" podCreationTimestamp="2026-03-18 10:21:00 +0000 UTC" firstStartedPulling="2026-03-18 10:21:02.149801984 +0000 UTC m=+626.202547913" lastFinishedPulling="2026-03-18 
10:21:10.317136633 +0000 UTC m=+634.369882552" observedRunningTime="2026-03-18 10:21:11.393361395 +0000 UTC m=+635.446107324" watchObservedRunningTime="2026-03-18 10:21:11.394726389 +0000 UTC m=+635.447472318" Mar 18 10:21:11.436905 master-0 kubenswrapper[30420]: W0318 10:21:11.434371 30420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32248d17_01fa_4580_90a9_1cff5b20cb66.slice/crio-54fbebb186ae880a247dd5f35d0f80a7d7e5c040d0490f8cae959b0feae6a3a6 WatchSource:0}: Error finding container 54fbebb186ae880a247dd5f35d0f80a7d7e5c040d0490f8cae959b0feae6a3a6: Status 404 returned error can't find the container with id 54fbebb186ae880a247dd5f35d0f80a7d7e5c040d0490f8cae959b0feae6a3a6 Mar 18 10:21:11.438327 master-0 kubenswrapper[30420]: I0318 10:21:11.438241 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-646786b97b-ngcvz"] Mar 18 10:21:11.443335 master-0 kubenswrapper[30420]: I0318 10:21:11.443271 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-564bb7959-qgbm2" podStartSLOduration=4.255118636 podStartE2EDuration="12.443249807s" podCreationTimestamp="2026-03-18 10:20:59 +0000 UTC" firstStartedPulling="2026-03-18 10:21:02.143889636 +0000 UTC m=+626.196635555" lastFinishedPulling="2026-03-18 10:21:10.332020797 +0000 UTC m=+634.384766726" observedRunningTime="2026-03-18 10:21:11.423235715 +0000 UTC m=+635.475981644" watchObservedRunningTime="2026-03-18 10:21:11.443249807 +0000 UTC m=+635.495995736" Mar 18 10:21:12.383347 master-0 kubenswrapper[30420]: I0318 10:21:12.383191 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-646786b97b-ngcvz" event={"ID":"32248d17-01fa-4580-90a9-1cff5b20cb66","Type":"ContainerStarted","Data":"54fbebb186ae880a247dd5f35d0f80a7d7e5c040d0490f8cae959b0feae6a3a6"} Mar 18 10:21:21.390281 master-0 
kubenswrapper[30420]: I0318 10:21:21.390084 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7bff698c48-vrvtb" Mar 18 10:21:23.559844 master-0 kubenswrapper[30420]: I0318 10:21:23.559242 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-8ff7d675-vh7xm" event={"ID":"afdba306-7371-4b95-aaf9-9398417e1b12","Type":"ContainerStarted","Data":"9b2e8d990354ee757bbf57397995e8813842ba34f72775e9afd448b65cc6b6ff"} Mar 18 10:21:23.570251 master-0 kubenswrapper[30420]: I0318 10:21:23.568227 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-777fbd5757-rsg7p" event={"ID":"a1dd8046-1f18-404b-87df-00c917d1fdc2","Type":"ContainerStarted","Data":"28d30cfb927153e7e41e958e7faf6eaeb1e1d791bec537cd39a9808f1e06db27"} Mar 18 10:21:23.573837 master-0 kubenswrapper[30420]: I0318 10:21:23.573091 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-777fbd5757-czjbr" event={"ID":"62cbc290-158f-4399-aeb5-a97661aca61d","Type":"ContainerStarted","Data":"3d89c5f1a6e32d897eafb239c53826e3587f50b511cbb4d27a52d98ea5c53676"} Mar 18 10:21:23.583848 master-0 kubenswrapper[30420]: I0318 10:21:23.583057 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-6dd7dd855f-2csj8" event={"ID":"32cdee06-2791-4cac-9447-26fee189be3f","Type":"ContainerStarted","Data":"aac300b4bcbc4e6951fd9e9e53fc2ae03f32426bfe012aa0513061133684dcf1"} Mar 18 10:21:23.583848 master-0 kubenswrapper[30420]: I0318 10:21:23.583274 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-6dd7dd855f-2csj8" Mar 18 10:21:23.587846 master-0 kubenswrapper[30420]: I0318 10:21:23.584756 30420 patch_prober.go:28] interesting pod/observability-operator-6dd7dd855f-2csj8 
container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.128.0.132:8081/healthz\": dial tcp 10.128.0.132:8081: connect: connection refused" start-of-body= Mar 18 10:21:23.587846 master-0 kubenswrapper[30420]: I0318 10:21:23.584838 30420 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-6dd7dd855f-2csj8" podUID="32cdee06-2791-4cac-9447-26fee189be3f" containerName="operator" probeResult="failure" output="Get \"http://10.128.0.132:8081/healthz\": dial tcp 10.128.0.132:8081: connect: connection refused" Mar 18 10:21:23.587846 master-0 kubenswrapper[30420]: I0318 10:21:23.585233 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-646786b97b-ngcvz" event={"ID":"32248d17-01fa-4580-90a9-1cff5b20cb66","Type":"ContainerStarted","Data":"c6a99c83859af8c8480604eabeb0b8fde76b900b847f47e872a93631921924c4"} Mar 18 10:21:23.587846 master-0 kubenswrapper[30420]: I0318 10:21:23.585970 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-646786b97b-ngcvz" Mar 18 10:21:23.673713 master-0 kubenswrapper[30420]: I0318 10:21:23.673560 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-8ff7d675-vh7xm" podStartSLOduration=3.50234798 podStartE2EDuration="15.673540967s" podCreationTimestamp="2026-03-18 10:21:08 +0000 UTC" firstStartedPulling="2026-03-18 10:21:10.979912498 +0000 UTC m=+635.032658427" lastFinishedPulling="2026-03-18 10:21:23.151105485 +0000 UTC m=+647.203851414" observedRunningTime="2026-03-18 10:21:23.625126872 +0000 UTC m=+647.677872801" watchObservedRunningTime="2026-03-18 10:21:23.673540967 +0000 UTC m=+647.726286896" Mar 18 10:21:23.677611 master-0 kubenswrapper[30420]: I0318 10:21:23.677550 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-777fbd5757-czjbr" podStartSLOduration=2.678549672 podStartE2EDuration="14.677538407s" podCreationTimestamp="2026-03-18 10:21:09 +0000 UTC" firstStartedPulling="2026-03-18 10:21:11.186036021 +0000 UTC m=+635.238781950" lastFinishedPulling="2026-03-18 10:21:23.185024756 +0000 UTC m=+647.237770685" observedRunningTime="2026-03-18 10:21:23.672693676 +0000 UTC m=+647.725439605" watchObservedRunningTime="2026-03-18 10:21:23.677538407 +0000 UTC m=+647.730284336" Mar 18 10:21:23.748257 master-0 kubenswrapper[30420]: I0318 10:21:23.748095 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-777fbd5757-rsg7p" podStartSLOduration=2.774787109 podStartE2EDuration="14.748074218s" podCreationTimestamp="2026-03-18 10:21:09 +0000 UTC" firstStartedPulling="2026-03-18 10:21:11.185765635 +0000 UTC m=+635.238511564" lastFinishedPulling="2026-03-18 10:21:23.159052734 +0000 UTC m=+647.211798673" observedRunningTime="2026-03-18 10:21:23.741431051 +0000 UTC m=+647.794176970" watchObservedRunningTime="2026-03-18 10:21:23.748074218 +0000 UTC m=+647.800820147" Mar 18 10:21:23.867617 master-0 kubenswrapper[30420]: I0318 10:21:23.867548 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-6dd7dd855f-2csj8" podStartSLOduration=2.974757368 podStartE2EDuration="14.867533676s" podCreationTimestamp="2026-03-18 10:21:09 +0000 UTC" firstStartedPulling="2026-03-18 10:21:11.292298689 +0000 UTC m=+635.345044608" lastFinishedPulling="2026-03-18 10:21:23.185074977 +0000 UTC m=+647.237820916" observedRunningTime="2026-03-18 10:21:23.820099686 +0000 UTC m=+647.872845615" watchObservedRunningTime="2026-03-18 10:21:23.867533676 +0000 UTC m=+647.920279605" Mar 18 10:21:23.870691 master-0 kubenswrapper[30420]: I0318 10:21:23.870639 30420 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-operators/perses-operator-646786b97b-ngcvz" podStartSLOduration=2.183419535 podStartE2EDuration="13.870628324s" podCreationTimestamp="2026-03-18 10:21:10 +0000 UTC" firstStartedPulling="2026-03-18 10:21:11.43898496 +0000 UTC m=+635.491730889" lastFinishedPulling="2026-03-18 10:21:23.126193749 +0000 UTC m=+647.178939678" observedRunningTime="2026-03-18 10:21:23.86531481 +0000 UTC m=+647.918060739" watchObservedRunningTime="2026-03-18 10:21:23.870628324 +0000 UTC m=+647.923374253" Mar 18 10:21:24.595436 master-0 kubenswrapper[30420]: I0318 10:21:24.595396 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-6dd7dd855f-2csj8" Mar 18 10:21:30.942730 master-0 kubenswrapper[30420]: I0318 10:21:30.942662 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-646786b97b-ngcvz" Mar 18 10:21:40.201350 master-0 kubenswrapper[30420]: I0318 10:21:40.201277 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-564bb7959-qgbm2" Mar 18 10:21:49.914860 master-0 kubenswrapper[30420]: I0318 10:21:49.913543 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-nqcwb"] Mar 18 10:21:49.914860 master-0 kubenswrapper[30420]: I0318 10:21:49.914732 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nqcwb" Mar 18 10:21:49.917975 master-0 kubenswrapper[30420]: I0318 10:21:49.916871 30420 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 18 10:21:49.925925 master-0 kubenswrapper[30420]: I0318 10:21:49.925852 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-jr8gm"] Mar 18 10:21:49.932915 master-0 kubenswrapper[30420]: I0318 10:21:49.932481 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-nqcwb"] Mar 18 10:21:49.932915 master-0 kubenswrapper[30420]: I0318 10:21:49.932623 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-jr8gm" Mar 18 10:21:49.937726 master-0 kubenswrapper[30420]: I0318 10:21:49.936687 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 18 10:21:49.937726 master-0 kubenswrapper[30420]: I0318 10:21:49.936929 30420 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 18 10:21:50.034002 master-0 kubenswrapper[30420]: I0318 10:21:50.033721 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-nv5ns"] Mar 18 10:21:50.039605 master-0 kubenswrapper[30420]: I0318 10:21:50.039521 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-nv5ns" Mar 18 10:21:50.043080 master-0 kubenswrapper[30420]: I0318 10:21:50.043027 30420 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 18 10:21:50.043243 master-0 kubenswrapper[30420]: I0318 10:21:50.043193 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 18 10:21:50.047444 master-0 kubenswrapper[30420]: I0318 10:21:50.046401 30420 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 18 10:21:50.047444 master-0 kubenswrapper[30420]: I0318 10:21:50.046615 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2011b02b-e4a7-43ac-af50-d30a48d38b1b-metrics-certs\") pod \"frr-k8s-jr8gm\" (UID: \"2011b02b-e4a7-43ac-af50-d30a48d38b1b\") " pod="metallb-system/frr-k8s-jr8gm" Mar 18 10:21:50.047444 master-0 kubenswrapper[30420]: I0318 10:21:50.046674 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2011b02b-e4a7-43ac-af50-d30a48d38b1b-frr-sockets\") pod \"frr-k8s-jr8gm\" (UID: \"2011b02b-e4a7-43ac-af50-d30a48d38b1b\") " pod="metallb-system/frr-k8s-jr8gm" Mar 18 10:21:50.047444 master-0 kubenswrapper[30420]: I0318 10:21:50.046704 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2011b02b-e4a7-43ac-af50-d30a48d38b1b-reloader\") pod \"frr-k8s-jr8gm\" (UID: \"2011b02b-e4a7-43ac-af50-d30a48d38b1b\") " pod="metallb-system/frr-k8s-jr8gm" Mar 18 10:21:50.047444 master-0 kubenswrapper[30420]: I0318 10:21:50.046723 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: 
\"kubernetes.io/empty-dir/2011b02b-e4a7-43ac-af50-d30a48d38b1b-metrics\") pod \"frr-k8s-jr8gm\" (UID: \"2011b02b-e4a7-43ac-af50-d30a48d38b1b\") " pod="metallb-system/frr-k8s-jr8gm" Mar 18 10:21:50.047444 master-0 kubenswrapper[30420]: I0318 10:21:50.046752 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0223b511-6041-4268-9c8a-079924b86793-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-nqcwb\" (UID: \"0223b511-6041-4268-9c8a-079924b86793\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nqcwb" Mar 18 10:21:50.047444 master-0 kubenswrapper[30420]: I0318 10:21:50.046773 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2011b02b-e4a7-43ac-af50-d30a48d38b1b-frr-conf\") pod \"frr-k8s-jr8gm\" (UID: \"2011b02b-e4a7-43ac-af50-d30a48d38b1b\") " pod="metallb-system/frr-k8s-jr8gm" Mar 18 10:21:50.047444 master-0 kubenswrapper[30420]: I0318 10:21:50.046808 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-495m9\" (UniqueName: \"kubernetes.io/projected/2011b02b-e4a7-43ac-af50-d30a48d38b1b-kube-api-access-495m9\") pod \"frr-k8s-jr8gm\" (UID: \"2011b02b-e4a7-43ac-af50-d30a48d38b1b\") " pod="metallb-system/frr-k8s-jr8gm" Mar 18 10:21:50.047444 master-0 kubenswrapper[30420]: I0318 10:21:50.046857 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2011b02b-e4a7-43ac-af50-d30a48d38b1b-frr-startup\") pod \"frr-k8s-jr8gm\" (UID: \"2011b02b-e4a7-43ac-af50-d30a48d38b1b\") " pod="metallb-system/frr-k8s-jr8gm" Mar 18 10:21:50.047444 master-0 kubenswrapper[30420]: I0318 10:21:50.046878 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmmq5\" 
(UniqueName: \"kubernetes.io/projected/0223b511-6041-4268-9c8a-079924b86793-kube-api-access-fmmq5\") pod \"frr-k8s-webhook-server-bcc4b6f68-nqcwb\" (UID: \"0223b511-6041-4268-9c8a-079924b86793\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nqcwb" Mar 18 10:21:50.058527 master-0 kubenswrapper[30420]: I0318 10:21:50.058458 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-7bb4cc7c98-w62wg"] Mar 18 10:21:50.061333 master-0 kubenswrapper[30420]: I0318 10:21:50.059872 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-w62wg" Mar 18 10:21:50.068662 master-0 kubenswrapper[30420]: I0318 10:21:50.068586 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-w62wg"] Mar 18 10:21:50.072915 master-0 kubenswrapper[30420]: I0318 10:21:50.072842 30420 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 18 10:21:50.148457 master-0 kubenswrapper[30420]: I0318 10:21:50.148343 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2011b02b-e4a7-43ac-af50-d30a48d38b1b-frr-sockets\") pod \"frr-k8s-jr8gm\" (UID: \"2011b02b-e4a7-43ac-af50-d30a48d38b1b\") " pod="metallb-system/frr-k8s-jr8gm" Mar 18 10:21:50.148457 master-0 kubenswrapper[30420]: I0318 10:21:50.148426 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2011b02b-e4a7-43ac-af50-d30a48d38b1b-reloader\") pod \"frr-k8s-jr8gm\" (UID: \"2011b02b-e4a7-43ac-af50-d30a48d38b1b\") " pod="metallb-system/frr-k8s-jr8gm" Mar 18 10:21:50.148743 master-0 kubenswrapper[30420]: I0318 10:21:50.148598 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: 
\"kubernetes.io/empty-dir/2011b02b-e4a7-43ac-af50-d30a48d38b1b-metrics\") pod \"frr-k8s-jr8gm\" (UID: \"2011b02b-e4a7-43ac-af50-d30a48d38b1b\") " pod="metallb-system/frr-k8s-jr8gm" Mar 18 10:21:50.148779 master-0 kubenswrapper[30420]: I0318 10:21:50.148737 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0223b511-6041-4268-9c8a-079924b86793-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-nqcwb\" (UID: \"0223b511-6041-4268-9c8a-079924b86793\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nqcwb" Mar 18 10:21:50.148992 master-0 kubenswrapper[30420]: I0318 10:21:50.148933 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2011b02b-e4a7-43ac-af50-d30a48d38b1b-frr-sockets\") pod \"frr-k8s-jr8gm\" (UID: \"2011b02b-e4a7-43ac-af50-d30a48d38b1b\") " pod="metallb-system/frr-k8s-jr8gm" Mar 18 10:21:50.148992 master-0 kubenswrapper[30420]: I0318 10:21:50.148977 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2011b02b-e4a7-43ac-af50-d30a48d38b1b-reloader\") pod \"frr-k8s-jr8gm\" (UID: \"2011b02b-e4a7-43ac-af50-d30a48d38b1b\") " pod="metallb-system/frr-k8s-jr8gm" Mar 18 10:21:50.149141 master-0 kubenswrapper[30420]: I0318 10:21:50.149001 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df659e73-45f8-4601-a966-c2de80fd6ba2-cert\") pod \"controller-7bb4cc7c98-w62wg\" (UID: \"df659e73-45f8-4601-a966-c2de80fd6ba2\") " pod="metallb-system/controller-7bb4cc7c98-w62wg" Mar 18 10:21:50.149141 master-0 kubenswrapper[30420]: I0318 10:21:50.149062 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3-memberlist\") pod 
\"speaker-nv5ns\" (UID: \"9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3\") " pod="metallb-system/speaker-nv5ns" Mar 18 10:21:50.149141 master-0 kubenswrapper[30420]: I0318 10:21:50.149090 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2011b02b-e4a7-43ac-af50-d30a48d38b1b-frr-conf\") pod \"frr-k8s-jr8gm\" (UID: \"2011b02b-e4a7-43ac-af50-d30a48d38b1b\") " pod="metallb-system/frr-k8s-jr8gm" Mar 18 10:21:50.149141 master-0 kubenswrapper[30420]: I0318 10:21:50.149112 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3-metrics-certs\") pod \"speaker-nv5ns\" (UID: \"9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3\") " pod="metallb-system/speaker-nv5ns" Mar 18 10:21:50.149141 master-0 kubenswrapper[30420]: I0318 10:21:50.149137 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df659e73-45f8-4601-a966-c2de80fd6ba2-metrics-certs\") pod \"controller-7bb4cc7c98-w62wg\" (UID: \"df659e73-45f8-4601-a966-c2de80fd6ba2\") " pod="metallb-system/controller-7bb4cc7c98-w62wg" Mar 18 10:21:50.149293 master-0 kubenswrapper[30420]: I0318 10:21:50.149168 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzl6x\" (UniqueName: \"kubernetes.io/projected/9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3-kube-api-access-hzl6x\") pod \"speaker-nv5ns\" (UID: \"9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3\") " pod="metallb-system/speaker-nv5ns" Mar 18 10:21:50.149293 master-0 kubenswrapper[30420]: I0318 10:21:50.149194 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-495m9\" (UniqueName: \"kubernetes.io/projected/2011b02b-e4a7-43ac-af50-d30a48d38b1b-kube-api-access-495m9\") pod \"frr-k8s-jr8gm\" 
(UID: \"2011b02b-e4a7-43ac-af50-d30a48d38b1b\") " pod="metallb-system/frr-k8s-jr8gm" Mar 18 10:21:50.149293 master-0 kubenswrapper[30420]: I0318 10:21:50.149214 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2011b02b-e4a7-43ac-af50-d30a48d38b1b-frr-startup\") pod \"frr-k8s-jr8gm\" (UID: \"2011b02b-e4a7-43ac-af50-d30a48d38b1b\") " pod="metallb-system/frr-k8s-jr8gm" Mar 18 10:21:50.149293 master-0 kubenswrapper[30420]: I0318 10:21:50.149235 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmmq5\" (UniqueName: \"kubernetes.io/projected/0223b511-6041-4268-9c8a-079924b86793-kube-api-access-fmmq5\") pod \"frr-k8s-webhook-server-bcc4b6f68-nqcwb\" (UID: \"0223b511-6041-4268-9c8a-079924b86793\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nqcwb" Mar 18 10:21:50.149293 master-0 kubenswrapper[30420]: I0318 10:21:50.149267 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbt9k\" (UniqueName: \"kubernetes.io/projected/df659e73-45f8-4601-a966-c2de80fd6ba2-kube-api-access-zbt9k\") pod \"controller-7bb4cc7c98-w62wg\" (UID: \"df659e73-45f8-4601-a966-c2de80fd6ba2\") " pod="metallb-system/controller-7bb4cc7c98-w62wg" Mar 18 10:21:50.149293 master-0 kubenswrapper[30420]: I0318 10:21:50.149292 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3-metallb-excludel2\") pod \"speaker-nv5ns\" (UID: \"9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3\") " pod="metallb-system/speaker-nv5ns" Mar 18 10:21:50.149483 master-0 kubenswrapper[30420]: I0318 10:21:50.149307 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/2011b02b-e4a7-43ac-af50-d30a48d38b1b-metrics-certs\") pod \"frr-k8s-jr8gm\" (UID: \"2011b02b-e4a7-43ac-af50-d30a48d38b1b\") " pod="metallb-system/frr-k8s-jr8gm" Mar 18 10:21:50.149483 master-0 kubenswrapper[30420]: I0318 10:21:50.149321 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2011b02b-e4a7-43ac-af50-d30a48d38b1b-metrics\") pod \"frr-k8s-jr8gm\" (UID: \"2011b02b-e4a7-43ac-af50-d30a48d38b1b\") " pod="metallb-system/frr-k8s-jr8gm" Mar 18 10:21:50.149947 master-0 kubenswrapper[30420]: I0318 10:21:50.149621 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2011b02b-e4a7-43ac-af50-d30a48d38b1b-frr-conf\") pod \"frr-k8s-jr8gm\" (UID: \"2011b02b-e4a7-43ac-af50-d30a48d38b1b\") " pod="metallb-system/frr-k8s-jr8gm" Mar 18 10:21:50.151240 master-0 kubenswrapper[30420]: I0318 10:21:50.151202 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2011b02b-e4a7-43ac-af50-d30a48d38b1b-frr-startup\") pod \"frr-k8s-jr8gm\" (UID: \"2011b02b-e4a7-43ac-af50-d30a48d38b1b\") " pod="metallb-system/frr-k8s-jr8gm" Mar 18 10:21:50.155286 master-0 kubenswrapper[30420]: I0318 10:21:50.155263 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2011b02b-e4a7-43ac-af50-d30a48d38b1b-metrics-certs\") pod \"frr-k8s-jr8gm\" (UID: \"2011b02b-e4a7-43ac-af50-d30a48d38b1b\") " pod="metallb-system/frr-k8s-jr8gm" Mar 18 10:21:50.156352 master-0 kubenswrapper[30420]: I0318 10:21:50.156312 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0223b511-6041-4268-9c8a-079924b86793-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-nqcwb\" (UID: \"0223b511-6041-4268-9c8a-079924b86793\") " 
pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nqcwb" Mar 18 10:21:50.167021 master-0 kubenswrapper[30420]: I0318 10:21:50.166949 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-495m9\" (UniqueName: \"kubernetes.io/projected/2011b02b-e4a7-43ac-af50-d30a48d38b1b-kube-api-access-495m9\") pod \"frr-k8s-jr8gm\" (UID: \"2011b02b-e4a7-43ac-af50-d30a48d38b1b\") " pod="metallb-system/frr-k8s-jr8gm" Mar 18 10:21:50.168191 master-0 kubenswrapper[30420]: I0318 10:21:50.168170 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmmq5\" (UniqueName: \"kubernetes.io/projected/0223b511-6041-4268-9c8a-079924b86793-kube-api-access-fmmq5\") pod \"frr-k8s-webhook-server-bcc4b6f68-nqcwb\" (UID: \"0223b511-6041-4268-9c8a-079924b86793\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nqcwb" Mar 18 10:21:50.250939 master-0 kubenswrapper[30420]: I0318 10:21:50.250890 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbt9k\" (UniqueName: \"kubernetes.io/projected/df659e73-45f8-4601-a966-c2de80fd6ba2-kube-api-access-zbt9k\") pod \"controller-7bb4cc7c98-w62wg\" (UID: \"df659e73-45f8-4601-a966-c2de80fd6ba2\") " pod="metallb-system/controller-7bb4cc7c98-w62wg" Mar 18 10:21:50.251283 master-0 kubenswrapper[30420]: I0318 10:21:50.250955 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3-metallb-excludel2\") pod \"speaker-nv5ns\" (UID: \"9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3\") " pod="metallb-system/speaker-nv5ns" Mar 18 10:21:50.251651 master-0 kubenswrapper[30420]: I0318 10:21:50.251620 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df659e73-45f8-4601-a966-c2de80fd6ba2-cert\") pod \"controller-7bb4cc7c98-w62wg\" (UID: 
\"df659e73-45f8-4601-a966-c2de80fd6ba2\") " pod="metallb-system/controller-7bb4cc7c98-w62wg" Mar 18 10:21:50.251704 master-0 kubenswrapper[30420]: I0318 10:21:50.251652 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3-memberlist\") pod \"speaker-nv5ns\" (UID: \"9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3\") " pod="metallb-system/speaker-nv5ns" Mar 18 10:21:50.251704 master-0 kubenswrapper[30420]: I0318 10:21:50.251670 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3-metrics-certs\") pod \"speaker-nv5ns\" (UID: \"9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3\") " pod="metallb-system/speaker-nv5ns" Mar 18 10:21:50.251704 master-0 kubenswrapper[30420]: I0318 10:21:50.251693 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df659e73-45f8-4601-a966-c2de80fd6ba2-metrics-certs\") pod \"controller-7bb4cc7c98-w62wg\" (UID: \"df659e73-45f8-4601-a966-c2de80fd6ba2\") " pod="metallb-system/controller-7bb4cc7c98-w62wg" Mar 18 10:21:50.251852 master-0 kubenswrapper[30420]: I0318 10:21:50.251741 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzl6x\" (UniqueName: \"kubernetes.io/projected/9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3-kube-api-access-hzl6x\") pod \"speaker-nv5ns\" (UID: \"9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3\") " pod="metallb-system/speaker-nv5ns" Mar 18 10:21:50.251852 master-0 kubenswrapper[30420]: E0318 10:21:50.251759 30420 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 18 10:21:50.251852 master-0 kubenswrapper[30420]: E0318 10:21:50.251811 30420 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3-memberlist podName:9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3 nodeName:}" failed. No retries permitted until 2026-03-18 10:21:50.751791873 +0000 UTC m=+674.804537802 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3-memberlist") pod "speaker-nv5ns" (UID: "9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3") : secret "metallb-memberlist" not found Mar 18 10:21:50.252381 master-0 kubenswrapper[30420]: I0318 10:21:50.252345 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3-metallb-excludel2\") pod \"speaker-nv5ns\" (UID: \"9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3\") " pod="metallb-system/speaker-nv5ns" Mar 18 10:21:50.253358 master-0 kubenswrapper[30420]: I0318 10:21:50.253335 30420 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 18 10:21:50.255361 master-0 kubenswrapper[30420]: I0318 10:21:50.255330 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3-metrics-certs\") pod \"speaker-nv5ns\" (UID: \"9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3\") " pod="metallb-system/speaker-nv5ns" Mar 18 10:21:50.255524 master-0 kubenswrapper[30420]: I0318 10:21:50.255370 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nqcwb" Mar 18 10:21:50.255666 master-0 kubenswrapper[30420]: I0318 10:21:50.255618 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df659e73-45f8-4601-a966-c2de80fd6ba2-metrics-certs\") pod \"controller-7bb4cc7c98-w62wg\" (UID: \"df659e73-45f8-4601-a966-c2de80fd6ba2\") " pod="metallb-system/controller-7bb4cc7c98-w62wg" Mar 18 10:21:50.266678 master-0 kubenswrapper[30420]: I0318 10:21:50.266630 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df659e73-45f8-4601-a966-c2de80fd6ba2-cert\") pod \"controller-7bb4cc7c98-w62wg\" (UID: \"df659e73-45f8-4601-a966-c2de80fd6ba2\") " pod="metallb-system/controller-7bb4cc7c98-w62wg" Mar 18 10:21:50.267951 master-0 kubenswrapper[30420]: I0318 10:21:50.267919 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbt9k\" (UniqueName: \"kubernetes.io/projected/df659e73-45f8-4601-a966-c2de80fd6ba2-kube-api-access-zbt9k\") pod \"controller-7bb4cc7c98-w62wg\" (UID: \"df659e73-45f8-4601-a966-c2de80fd6ba2\") " pod="metallb-system/controller-7bb4cc7c98-w62wg" Mar 18 10:21:50.268610 master-0 kubenswrapper[30420]: I0318 10:21:50.268583 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzl6x\" (UniqueName: \"kubernetes.io/projected/9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3-kube-api-access-hzl6x\") pod \"speaker-nv5ns\" (UID: \"9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3\") " pod="metallb-system/speaker-nv5ns" Mar 18 10:21:50.278677 master-0 kubenswrapper[30420]: I0318 10:21:50.278634 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-jr8gm" Mar 18 10:21:50.411388 master-0 kubenswrapper[30420]: I0318 10:21:50.411337 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-w62wg" Mar 18 10:21:50.664597 master-0 kubenswrapper[30420]: I0318 10:21:50.664536 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-nqcwb"] Mar 18 10:21:50.667084 master-0 kubenswrapper[30420]: W0318 10:21:50.667021 30420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0223b511_6041_4268_9c8a_079924b86793.slice/crio-ad66d0991e131793cf58b60406e9a31a850024bfb7981d3a3845c6a2010445bf WatchSource:0}: Error finding container ad66d0991e131793cf58b60406e9a31a850024bfb7981d3a3845c6a2010445bf: Status 404 returned error can't find the container with id ad66d0991e131793cf58b60406e9a31a850024bfb7981d3a3845c6a2010445bf Mar 18 10:21:50.762902 master-0 kubenswrapper[30420]: I0318 10:21:50.762798 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3-memberlist\") pod \"speaker-nv5ns\" (UID: \"9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3\") " pod="metallb-system/speaker-nv5ns" Mar 18 10:21:50.763432 master-0 kubenswrapper[30420]: E0318 10:21:50.763000 30420 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 18 10:21:50.763540 master-0 kubenswrapper[30420]: E0318 10:21:50.763483 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3-memberlist podName:9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3 nodeName:}" failed. No retries permitted until 2026-03-18 10:21:51.763456755 +0000 UTC m=+675.816202724 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3-memberlist") pod "speaker-nv5ns" (UID: "9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3") : secret "metallb-memberlist" not found Mar 18 10:21:50.876947 master-0 kubenswrapper[30420]: I0318 10:21:50.876794 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-w62wg"] Mar 18 10:21:50.883526 master-0 kubenswrapper[30420]: I0318 10:21:50.883446 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nqcwb" event={"ID":"0223b511-6041-4268-9c8a-079924b86793","Type":"ContainerStarted","Data":"ad66d0991e131793cf58b60406e9a31a850024bfb7981d3a3845c6a2010445bf"} Mar 18 10:21:50.884912 master-0 kubenswrapper[30420]: I0318 10:21:50.884865 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jr8gm" event={"ID":"2011b02b-e4a7-43ac-af50-d30a48d38b1b","Type":"ContainerStarted","Data":"aeb9d3ac4c8c98897ccfb0faca99ecb6a8ae00c2ee4aa85246884036131fadc6"} Mar 18 10:21:50.886317 master-0 kubenswrapper[30420]: W0318 10:21:50.886268 30420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf659e73_45f8_4601_a966_c2de80fd6ba2.slice/crio-3c5b65b06b64c234e0ab0290a88e9affe5de13c169f63def586174995c5d5dbf WatchSource:0}: Error finding container 3c5b65b06b64c234e0ab0290a88e9affe5de13c169f63def586174995c5d5dbf: Status 404 returned error can't find the container with id 3c5b65b06b64c234e0ab0290a88e9affe5de13c169f63def586174995c5d5dbf Mar 18 10:21:51.790045 master-0 kubenswrapper[30420]: I0318 10:21:51.789944 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3-memberlist\") pod \"speaker-nv5ns\" (UID: \"9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3\") " 
pod="metallb-system/speaker-nv5ns" Mar 18 10:21:51.795856 master-0 kubenswrapper[30420]: I0318 10:21:51.794226 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3-memberlist\") pod \"speaker-nv5ns\" (UID: \"9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3\") " pod="metallb-system/speaker-nv5ns" Mar 18 10:21:51.877578 master-0 kubenswrapper[30420]: I0318 10:21:51.877490 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-nv5ns" Mar 18 10:21:51.940996 master-0 kubenswrapper[30420]: I0318 10:21:51.936070 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-w62wg" event={"ID":"df659e73-45f8-4601-a966-c2de80fd6ba2","Type":"ContainerStarted","Data":"ba02bde6e5e6044660ac53a0234c5f8c04eb88790dc9418176a2226837cfa18b"} Mar 18 10:21:51.940996 master-0 kubenswrapper[30420]: I0318 10:21:51.936177 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-w62wg" event={"ID":"df659e73-45f8-4601-a966-c2de80fd6ba2","Type":"ContainerStarted","Data":"3c5b65b06b64c234e0ab0290a88e9affe5de13c169f63def586174995c5d5dbf"} Mar 18 10:21:52.281963 master-0 kubenswrapper[30420]: I0318 10:21:52.278774 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-v4pqg"] Mar 18 10:21:52.289111 master-0 kubenswrapper[30420]: I0318 10:21:52.285875 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-v4pqg" Mar 18 10:21:52.305399 master-0 kubenswrapper[30420]: I0318 10:21:52.305332 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-bfbvm"] Mar 18 10:21:52.307437 master-0 kubenswrapper[30420]: I0318 10:21:52.307402 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-bfbvm" Mar 18 10:21:52.310679 master-0 kubenswrapper[30420]: I0318 10:21:52.310646 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 18 10:21:52.314984 master-0 kubenswrapper[30420]: I0318 10:21:52.314944 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-bfbvm"] Mar 18 10:21:52.335572 master-0 kubenswrapper[30420]: I0318 10:21:52.330520 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-clbx9"] Mar 18 10:21:52.335572 master-0 kubenswrapper[30420]: I0318 10:21:52.334518 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-clbx9" Mar 18 10:21:52.341530 master-0 kubenswrapper[30420]: I0318 10:21:52.339495 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-v4pqg"] Mar 18 10:21:52.405355 master-0 kubenswrapper[30420]: I0318 10:21:52.404991 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/49f160ff-7093-4d65-99b5-51ea63e10306-dbus-socket\") pod \"nmstate-handler-clbx9\" (UID: \"49f160ff-7093-4d65-99b5-51ea63e10306\") " pod="openshift-nmstate/nmstate-handler-clbx9" Mar 18 10:21:52.405355 master-0 kubenswrapper[30420]: I0318 10:21:52.405059 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7stqb\" (UniqueName: \"kubernetes.io/projected/8d3ab44d-452a-4080-b985-0e24d2d5bf5d-kube-api-access-7stqb\") pod \"nmstate-webhook-5f558f5558-bfbvm\" (UID: \"8d3ab44d-452a-4080-b985-0e24d2d5bf5d\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-bfbvm" Mar 18 10:21:52.405355 master-0 kubenswrapper[30420]: I0318 10:21:52.405111 30420 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/49f160ff-7093-4d65-99b5-51ea63e10306-nmstate-lock\") pod \"nmstate-handler-clbx9\" (UID: \"49f160ff-7093-4d65-99b5-51ea63e10306\") " pod="openshift-nmstate/nmstate-handler-clbx9" Mar 18 10:21:52.405355 master-0 kubenswrapper[30420]: I0318 10:21:52.405143 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/49f160ff-7093-4d65-99b5-51ea63e10306-ovs-socket\") pod \"nmstate-handler-clbx9\" (UID: \"49f160ff-7093-4d65-99b5-51ea63e10306\") " pod="openshift-nmstate/nmstate-handler-clbx9" Mar 18 10:21:52.405355 master-0 kubenswrapper[30420]: I0318 10:21:52.405163 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/8d3ab44d-452a-4080-b985-0e24d2d5bf5d-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-bfbvm\" (UID: \"8d3ab44d-452a-4080-b985-0e24d2d5bf5d\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-bfbvm" Mar 18 10:21:52.405355 master-0 kubenswrapper[30420]: I0318 10:21:52.405199 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x9jq\" (UniqueName: \"kubernetes.io/projected/49f160ff-7093-4d65-99b5-51ea63e10306-kube-api-access-2x9jq\") pod \"nmstate-handler-clbx9\" (UID: \"49f160ff-7093-4d65-99b5-51ea63e10306\") " pod="openshift-nmstate/nmstate-handler-clbx9" Mar 18 10:21:52.405355 master-0 kubenswrapper[30420]: I0318 10:21:52.405234 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg4rp\" (UniqueName: \"kubernetes.io/projected/feb29aaa-6472-498d-9362-9f56312e248a-kube-api-access-wg4rp\") pod \"nmstate-metrics-9b8c8685d-v4pqg\" (UID: \"feb29aaa-6472-498d-9362-9f56312e248a\") " 
pod="openshift-nmstate/nmstate-metrics-9b8c8685d-v4pqg" Mar 18 10:21:52.457097 master-0 kubenswrapper[30420]: I0318 10:21:52.457011 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-64gfl"] Mar 18 10:21:52.460222 master-0 kubenswrapper[30420]: I0318 10:21:52.459817 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-64gfl" Mar 18 10:21:52.484312 master-0 kubenswrapper[30420]: I0318 10:21:52.484110 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Mar 18 10:21:52.484312 master-0 kubenswrapper[30420]: I0318 10:21:52.484249 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Mar 18 10:21:52.511676 master-0 kubenswrapper[30420]: I0318 10:21:52.511618 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x9jq\" (UniqueName: \"kubernetes.io/projected/49f160ff-7093-4d65-99b5-51ea63e10306-kube-api-access-2x9jq\") pod \"nmstate-handler-clbx9\" (UID: \"49f160ff-7093-4d65-99b5-51ea63e10306\") " pod="openshift-nmstate/nmstate-handler-clbx9" Mar 18 10:21:52.511921 master-0 kubenswrapper[30420]: I0318 10:21:52.511688 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg4rp\" (UniqueName: \"kubernetes.io/projected/feb29aaa-6472-498d-9362-9f56312e248a-kube-api-access-wg4rp\") pod \"nmstate-metrics-9b8c8685d-v4pqg\" (UID: \"feb29aaa-6472-498d-9362-9f56312e248a\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-v4pqg" Mar 18 10:21:52.511921 master-0 kubenswrapper[30420]: I0318 10:21:52.511723 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f9b32010-f07f-4e5a-a7a6-10e14dd65d91-nginx-conf\") pod 
\"nmstate-console-plugin-86f58fcf4-64gfl\" (UID: \"f9b32010-f07f-4e5a-a7a6-10e14dd65d91\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-64gfl" Mar 18 10:21:52.511921 master-0 kubenswrapper[30420]: I0318 10:21:52.511753 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/49f160ff-7093-4d65-99b5-51ea63e10306-dbus-socket\") pod \"nmstate-handler-clbx9\" (UID: \"49f160ff-7093-4d65-99b5-51ea63e10306\") " pod="openshift-nmstate/nmstate-handler-clbx9" Mar 18 10:21:52.511921 master-0 kubenswrapper[30420]: I0318 10:21:52.511771 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-649qx\" (UniqueName: \"kubernetes.io/projected/f9b32010-f07f-4e5a-a7a6-10e14dd65d91-kube-api-access-649qx\") pod \"nmstate-console-plugin-86f58fcf4-64gfl\" (UID: \"f9b32010-f07f-4e5a-a7a6-10e14dd65d91\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-64gfl" Mar 18 10:21:52.511921 master-0 kubenswrapper[30420]: I0318 10:21:52.511790 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7stqb\" (UniqueName: \"kubernetes.io/projected/8d3ab44d-452a-4080-b985-0e24d2d5bf5d-kube-api-access-7stqb\") pod \"nmstate-webhook-5f558f5558-bfbvm\" (UID: \"8d3ab44d-452a-4080-b985-0e24d2d5bf5d\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-bfbvm" Mar 18 10:21:52.511921 master-0 kubenswrapper[30420]: I0318 10:21:52.511833 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f9b32010-f07f-4e5a-a7a6-10e14dd65d91-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-64gfl\" (UID: \"f9b32010-f07f-4e5a-a7a6-10e14dd65d91\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-64gfl" Mar 18 10:21:52.511921 master-0 kubenswrapper[30420]: I0318 10:21:52.511861 30420 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/49f160ff-7093-4d65-99b5-51ea63e10306-nmstate-lock\") pod \"nmstate-handler-clbx9\" (UID: \"49f160ff-7093-4d65-99b5-51ea63e10306\") " pod="openshift-nmstate/nmstate-handler-clbx9" Mar 18 10:21:52.512276 master-0 kubenswrapper[30420]: I0318 10:21:52.512240 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/49f160ff-7093-4d65-99b5-51ea63e10306-ovs-socket\") pod \"nmstate-handler-clbx9\" (UID: \"49f160ff-7093-4d65-99b5-51ea63e10306\") " pod="openshift-nmstate/nmstate-handler-clbx9" Mar 18 10:21:52.512342 master-0 kubenswrapper[30420]: I0318 10:21:52.512303 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/8d3ab44d-452a-4080-b985-0e24d2d5bf5d-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-bfbvm\" (UID: \"8d3ab44d-452a-4080-b985-0e24d2d5bf5d\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-bfbvm" Mar 18 10:21:52.512559 master-0 kubenswrapper[30420]: E0318 10:21:52.512471 30420 secret.go:189] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Mar 18 10:21:52.512559 master-0 kubenswrapper[30420]: E0318 10:21:52.512540 30420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d3ab44d-452a-4080-b985-0e24d2d5bf5d-tls-key-pair podName:8d3ab44d-452a-4080-b985-0e24d2d5bf5d nodeName:}" failed. No retries permitted until 2026-03-18 10:21:53.012525394 +0000 UTC m=+677.065271323 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/8d3ab44d-452a-4080-b985-0e24d2d5bf5d-tls-key-pair") pod "nmstate-webhook-5f558f5558-bfbvm" (UID: "8d3ab44d-452a-4080-b985-0e24d2d5bf5d") : secret "openshift-nmstate-webhook" not found Mar 18 10:21:52.512778 master-0 kubenswrapper[30420]: I0318 10:21:52.512660 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/49f160ff-7093-4d65-99b5-51ea63e10306-nmstate-lock\") pod \"nmstate-handler-clbx9\" (UID: \"49f160ff-7093-4d65-99b5-51ea63e10306\") " pod="openshift-nmstate/nmstate-handler-clbx9" Mar 18 10:21:52.512935 master-0 kubenswrapper[30420]: I0318 10:21:52.512800 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/49f160ff-7093-4d65-99b5-51ea63e10306-ovs-socket\") pod \"nmstate-handler-clbx9\" (UID: \"49f160ff-7093-4d65-99b5-51ea63e10306\") " pod="openshift-nmstate/nmstate-handler-clbx9" Mar 18 10:21:52.513518 master-0 kubenswrapper[30420]: I0318 10:21:52.513494 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/49f160ff-7093-4d65-99b5-51ea63e10306-dbus-socket\") pod \"nmstate-handler-clbx9\" (UID: \"49f160ff-7093-4d65-99b5-51ea63e10306\") " pod="openshift-nmstate/nmstate-handler-clbx9" Mar 18 10:21:52.513587 master-0 kubenswrapper[30420]: I0318 10:21:52.513530 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-64gfl"] Mar 18 10:21:52.548644 master-0 kubenswrapper[30420]: I0318 10:21:52.548601 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7stqb\" (UniqueName: \"kubernetes.io/projected/8d3ab44d-452a-4080-b985-0e24d2d5bf5d-kube-api-access-7stqb\") pod \"nmstate-webhook-5f558f5558-bfbvm\" (UID: \"8d3ab44d-452a-4080-b985-0e24d2d5bf5d\") " 
pod="openshift-nmstate/nmstate-webhook-5f558f5558-bfbvm" Mar 18 10:21:52.550703 master-0 kubenswrapper[30420]: I0318 10:21:52.550672 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg4rp\" (UniqueName: \"kubernetes.io/projected/feb29aaa-6472-498d-9362-9f56312e248a-kube-api-access-wg4rp\") pod \"nmstate-metrics-9b8c8685d-v4pqg\" (UID: \"feb29aaa-6472-498d-9362-9f56312e248a\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-v4pqg" Mar 18 10:21:52.554120 master-0 kubenswrapper[30420]: I0318 10:21:52.551661 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x9jq\" (UniqueName: \"kubernetes.io/projected/49f160ff-7093-4d65-99b5-51ea63e10306-kube-api-access-2x9jq\") pod \"nmstate-handler-clbx9\" (UID: \"49f160ff-7093-4d65-99b5-51ea63e10306\") " pod="openshift-nmstate/nmstate-handler-clbx9" Mar 18 10:21:52.626580 master-0 kubenswrapper[30420]: I0318 10:21:52.619400 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f9b32010-f07f-4e5a-a7a6-10e14dd65d91-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-64gfl\" (UID: \"f9b32010-f07f-4e5a-a7a6-10e14dd65d91\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-64gfl" Mar 18 10:21:52.626580 master-0 kubenswrapper[30420]: I0318 10:21:52.619469 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-649qx\" (UniqueName: \"kubernetes.io/projected/f9b32010-f07f-4e5a-a7a6-10e14dd65d91-kube-api-access-649qx\") pod \"nmstate-console-plugin-86f58fcf4-64gfl\" (UID: \"f9b32010-f07f-4e5a-a7a6-10e14dd65d91\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-64gfl" Mar 18 10:21:52.626580 master-0 kubenswrapper[30420]: I0318 10:21:52.619735 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f9b32010-f07f-4e5a-a7a6-10e14dd65d91-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-64gfl\" (UID: \"f9b32010-f07f-4e5a-a7a6-10e14dd65d91\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-64gfl" Mar 18 10:21:52.626580 master-0 kubenswrapper[30420]: I0318 10:21:52.621566 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f9b32010-f07f-4e5a-a7a6-10e14dd65d91-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-64gfl\" (UID: \"f9b32010-f07f-4e5a-a7a6-10e14dd65d91\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-64gfl" Mar 18 10:21:52.626580 master-0 kubenswrapper[30420]: I0318 10:21:52.623408 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f9b32010-f07f-4e5a-a7a6-10e14dd65d91-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-64gfl\" (UID: \"f9b32010-f07f-4e5a-a7a6-10e14dd65d91\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-64gfl" Mar 18 10:21:52.645075 master-0 kubenswrapper[30420]: I0318 10:21:52.643217 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-v4pqg" Mar 18 10:21:52.650672 master-0 kubenswrapper[30420]: I0318 10:21:52.649800 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-649qx\" (UniqueName: \"kubernetes.io/projected/f9b32010-f07f-4e5a-a7a6-10e14dd65d91-kube-api-access-649qx\") pod \"nmstate-console-plugin-86f58fcf4-64gfl\" (UID: \"f9b32010-f07f-4e5a-a7a6-10e14dd65d91\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-64gfl" Mar 18 10:21:52.687153 master-0 kubenswrapper[30420]: I0318 10:21:52.687008 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-79d8975cbd-5smbb"] Mar 18 10:21:52.688199 master-0 kubenswrapper[30420]: I0318 10:21:52.688177 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-79d8975cbd-5smbb" Mar 18 10:21:52.702016 master-0 kubenswrapper[30420]: I0318 10:21:52.701895 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-clbx9" Mar 18 10:21:52.707051 master-0 kubenswrapper[30420]: I0318 10:21:52.706421 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-79d8975cbd-5smbb"] Mar 18 10:21:52.852709 master-0 kubenswrapper[30420]: I0318 10:21:52.852646 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-64gfl" Mar 18 10:21:52.857652 master-0 kubenswrapper[30420]: I0318 10:21:52.855992 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a42b137a-07fc-4146-8b2e-086c398dccef-service-ca\") pod \"console-79d8975cbd-5smbb\" (UID: \"a42b137a-07fc-4146-8b2e-086c398dccef\") " pod="openshift-console/console-79d8975cbd-5smbb" Mar 18 10:21:52.857652 master-0 kubenswrapper[30420]: I0318 10:21:52.856068 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a42b137a-07fc-4146-8b2e-086c398dccef-console-oauth-config\") pod \"console-79d8975cbd-5smbb\" (UID: \"a42b137a-07fc-4146-8b2e-086c398dccef\") " pod="openshift-console/console-79d8975cbd-5smbb" Mar 18 10:21:52.857652 master-0 kubenswrapper[30420]: I0318 10:21:52.856199 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a42b137a-07fc-4146-8b2e-086c398dccef-console-config\") pod \"console-79d8975cbd-5smbb\" (UID: \"a42b137a-07fc-4146-8b2e-086c398dccef\") " pod="openshift-console/console-79d8975cbd-5smbb" Mar 18 10:21:52.857652 master-0 kubenswrapper[30420]: I0318 10:21:52.856239 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxh4t\" (UniqueName: \"kubernetes.io/projected/a42b137a-07fc-4146-8b2e-086c398dccef-kube-api-access-mxh4t\") pod \"console-79d8975cbd-5smbb\" (UID: \"a42b137a-07fc-4146-8b2e-086c398dccef\") " pod="openshift-console/console-79d8975cbd-5smbb" Mar 18 10:21:52.857652 master-0 kubenswrapper[30420]: I0318 10:21:52.856364 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/a42b137a-07fc-4146-8b2e-086c398dccef-trusted-ca-bundle\") pod \"console-79d8975cbd-5smbb\" (UID: \"a42b137a-07fc-4146-8b2e-086c398dccef\") " pod="openshift-console/console-79d8975cbd-5smbb" Mar 18 10:21:52.857652 master-0 kubenswrapper[30420]: I0318 10:21:52.856393 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a42b137a-07fc-4146-8b2e-086c398dccef-console-serving-cert\") pod \"console-79d8975cbd-5smbb\" (UID: \"a42b137a-07fc-4146-8b2e-086c398dccef\") " pod="openshift-console/console-79d8975cbd-5smbb" Mar 18 10:21:52.857652 master-0 kubenswrapper[30420]: I0318 10:21:52.856484 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a42b137a-07fc-4146-8b2e-086c398dccef-oauth-serving-cert\") pod \"console-79d8975cbd-5smbb\" (UID: \"a42b137a-07fc-4146-8b2e-086c398dccef\") " pod="openshift-console/console-79d8975cbd-5smbb" Mar 18 10:21:52.958810 master-0 kubenswrapper[30420]: I0318 10:21:52.958604 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a42b137a-07fc-4146-8b2e-086c398dccef-service-ca\") pod \"console-79d8975cbd-5smbb\" (UID: \"a42b137a-07fc-4146-8b2e-086c398dccef\") " pod="openshift-console/console-79d8975cbd-5smbb" Mar 18 10:21:52.958810 master-0 kubenswrapper[30420]: I0318 10:21:52.958670 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a42b137a-07fc-4146-8b2e-086c398dccef-console-oauth-config\") pod \"console-79d8975cbd-5smbb\" (UID: \"a42b137a-07fc-4146-8b2e-086c398dccef\") " pod="openshift-console/console-79d8975cbd-5smbb" Mar 18 10:21:52.958810 master-0 kubenswrapper[30420]: I0318 10:21:52.958723 30420 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a42b137a-07fc-4146-8b2e-086c398dccef-console-config\") pod \"console-79d8975cbd-5smbb\" (UID: \"a42b137a-07fc-4146-8b2e-086c398dccef\") " pod="openshift-console/console-79d8975cbd-5smbb" Mar 18 10:21:52.958810 master-0 kubenswrapper[30420]: I0318 10:21:52.958747 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxh4t\" (UniqueName: \"kubernetes.io/projected/a42b137a-07fc-4146-8b2e-086c398dccef-kube-api-access-mxh4t\") pod \"console-79d8975cbd-5smbb\" (UID: \"a42b137a-07fc-4146-8b2e-086c398dccef\") " pod="openshift-console/console-79d8975cbd-5smbb" Mar 18 10:21:52.958810 master-0 kubenswrapper[30420]: I0318 10:21:52.958796 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a42b137a-07fc-4146-8b2e-086c398dccef-trusted-ca-bundle\") pod \"console-79d8975cbd-5smbb\" (UID: \"a42b137a-07fc-4146-8b2e-086c398dccef\") " pod="openshift-console/console-79d8975cbd-5smbb" Mar 18 10:21:52.958810 master-0 kubenswrapper[30420]: I0318 10:21:52.958816 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a42b137a-07fc-4146-8b2e-086c398dccef-console-serving-cert\") pod \"console-79d8975cbd-5smbb\" (UID: \"a42b137a-07fc-4146-8b2e-086c398dccef\") " pod="openshift-console/console-79d8975cbd-5smbb" Mar 18 10:21:52.959673 master-0 kubenswrapper[30420]: I0318 10:21:52.958882 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a42b137a-07fc-4146-8b2e-086c398dccef-oauth-serving-cert\") pod \"console-79d8975cbd-5smbb\" (UID: \"a42b137a-07fc-4146-8b2e-086c398dccef\") " pod="openshift-console/console-79d8975cbd-5smbb" Mar 18 10:21:52.959673 
master-0 kubenswrapper[30420]: I0318 10:21:52.959630 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a42b137a-07fc-4146-8b2e-086c398dccef-oauth-serving-cert\") pod \"console-79d8975cbd-5smbb\" (UID: \"a42b137a-07fc-4146-8b2e-086c398dccef\") " pod="openshift-console/console-79d8975cbd-5smbb" Mar 18 10:21:52.962929 master-0 kubenswrapper[30420]: I0318 10:21:52.961951 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a42b137a-07fc-4146-8b2e-086c398dccef-trusted-ca-bundle\") pod \"console-79d8975cbd-5smbb\" (UID: \"a42b137a-07fc-4146-8b2e-086c398dccef\") " pod="openshift-console/console-79d8975cbd-5smbb" Mar 18 10:21:52.964849 master-0 kubenswrapper[30420]: I0318 10:21:52.963836 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a42b137a-07fc-4146-8b2e-086c398dccef-console-config\") pod \"console-79d8975cbd-5smbb\" (UID: \"a42b137a-07fc-4146-8b2e-086c398dccef\") " pod="openshift-console/console-79d8975cbd-5smbb" Mar 18 10:21:52.964849 master-0 kubenswrapper[30420]: I0318 10:21:52.964511 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a42b137a-07fc-4146-8b2e-086c398dccef-service-ca\") pod \"console-79d8975cbd-5smbb\" (UID: \"a42b137a-07fc-4146-8b2e-086c398dccef\") " pod="openshift-console/console-79d8975cbd-5smbb" Mar 18 10:21:52.967805 master-0 kubenswrapper[30420]: I0318 10:21:52.967771 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a42b137a-07fc-4146-8b2e-086c398dccef-console-oauth-config\") pod \"console-79d8975cbd-5smbb\" (UID: \"a42b137a-07fc-4146-8b2e-086c398dccef\") " pod="openshift-console/console-79d8975cbd-5smbb" Mar 18 10:21:52.978966 master-0 
kubenswrapper[30420]: I0318 10:21:52.978876 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-w62wg" event={"ID":"df659e73-45f8-4601-a966-c2de80fd6ba2","Type":"ContainerStarted","Data":"ec641fe5bfc656d4999f6ae6ae50b1a90b4c2b80d27056aab28588a513f02b4f"} Mar 18 10:21:52.979226 master-0 kubenswrapper[30420]: I0318 10:21:52.979208 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-w62wg" Mar 18 10:21:52.986601 master-0 kubenswrapper[30420]: I0318 10:21:52.981610 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a42b137a-07fc-4146-8b2e-086c398dccef-console-serving-cert\") pod \"console-79d8975cbd-5smbb\" (UID: \"a42b137a-07fc-4146-8b2e-086c398dccef\") " pod="openshift-console/console-79d8975cbd-5smbb" Mar 18 10:21:52.986601 master-0 kubenswrapper[30420]: I0318 10:21:52.983565 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nv5ns" event={"ID":"9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3","Type":"ContainerStarted","Data":"079b319318fa3f1c0b58b9a2f1c23cc9236ff1aa7a08ce63f76d08a6d5198c66"} Mar 18 10:21:52.986601 master-0 kubenswrapper[30420]: I0318 10:21:52.983587 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nv5ns" event={"ID":"9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3","Type":"ContainerStarted","Data":"2144ac1aabf706edb8a8f644ccf4a4afc4f3fc4bcd92015dd965dcb902619cc2"} Mar 18 10:21:52.986601 master-0 kubenswrapper[30420]: I0318 10:21:52.983596 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nv5ns" event={"ID":"9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3","Type":"ContainerStarted","Data":"426dafd4899e28dbc6dae1ce153148c4465aa49bef53943f67c4b520cc2119fe"} Mar 18 10:21:52.986601 master-0 kubenswrapper[30420]: I0318 10:21:52.983932 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="metallb-system/speaker-nv5ns" Mar 18 10:21:52.986601 master-0 kubenswrapper[30420]: I0318 10:21:52.984885 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-clbx9" event={"ID":"49f160ff-7093-4d65-99b5-51ea63e10306","Type":"ContainerStarted","Data":"7d254df314f29a1d6235d0e9cb0671455d66aa1e1a9c467c9d9de3192567f2c9"} Mar 18 10:21:52.990428 master-0 kubenswrapper[30420]: I0318 10:21:52.986493 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxh4t\" (UniqueName: \"kubernetes.io/projected/a42b137a-07fc-4146-8b2e-086c398dccef-kube-api-access-mxh4t\") pod \"console-79d8975cbd-5smbb\" (UID: \"a42b137a-07fc-4146-8b2e-086c398dccef\") " pod="openshift-console/console-79d8975cbd-5smbb" Mar 18 10:21:53.038830 master-0 kubenswrapper[30420]: I0318 10:21:53.038568 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-nv5ns" podStartSLOduration=4.038547636 podStartE2EDuration="4.038547636s" podCreationTimestamp="2026-03-18 10:21:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:21:53.036622278 +0000 UTC m=+677.089368227" watchObservedRunningTime="2026-03-18 10:21:53.038547636 +0000 UTC m=+677.091293565" Mar 18 10:21:53.041105 master-0 kubenswrapper[30420]: I0318 10:21:53.039772 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-7bb4cc7c98-w62wg" podStartSLOduration=1.8486427920000001 podStartE2EDuration="3.039766577s" podCreationTimestamp="2026-03-18 10:21:50 +0000 UTC" firstStartedPulling="2026-03-18 10:21:51.011318346 +0000 UTC m=+675.064064275" lastFinishedPulling="2026-03-18 10:21:52.202442131 +0000 UTC m=+676.255188060" observedRunningTime="2026-03-18 10:21:53.013258461 +0000 UTC m=+677.066004390" watchObservedRunningTime="2026-03-18 10:21:53.039766577 +0000 UTC 
m=+677.092512506" Mar 18 10:21:53.049697 master-0 kubenswrapper[30420]: I0318 10:21:53.048582 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-79d8975cbd-5smbb" Mar 18 10:21:53.081008 master-0 kubenswrapper[30420]: I0318 10:21:53.080636 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/8d3ab44d-452a-4080-b985-0e24d2d5bf5d-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-bfbvm\" (UID: \"8d3ab44d-452a-4080-b985-0e24d2d5bf5d\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-bfbvm" Mar 18 10:21:53.087395 master-0 kubenswrapper[30420]: I0318 10:21:53.087338 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/8d3ab44d-452a-4080-b985-0e24d2d5bf5d-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-bfbvm\" (UID: \"8d3ab44d-452a-4080-b985-0e24d2d5bf5d\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-bfbvm" Mar 18 10:21:53.245945 master-0 kubenswrapper[30420]: I0318 10:21:53.245866 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-v4pqg"] Mar 18 10:21:53.284763 master-0 kubenswrapper[30420]: I0318 10:21:53.284694 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-bfbvm" Mar 18 10:21:53.372555 master-0 kubenswrapper[30420]: W0318 10:21:53.372457 30420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9b32010_f07f_4e5a_a7a6_10e14dd65d91.slice/crio-f7b6db72be955aa6c03a38a4307216e276443b0c3dd66e4432c500a9a1fe6299 WatchSource:0}: Error finding container f7b6db72be955aa6c03a38a4307216e276443b0c3dd66e4432c500a9a1fe6299: Status 404 returned error can't find the container with id f7b6db72be955aa6c03a38a4307216e276443b0c3dd66e4432c500a9a1fe6299 Mar 18 10:21:53.380195 master-0 kubenswrapper[30420]: I0318 10:21:53.380141 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-64gfl"] Mar 18 10:21:53.572589 master-0 kubenswrapper[30420]: I0318 10:21:53.572535 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-79d8975cbd-5smbb"] Mar 18 10:21:53.579470 master-0 kubenswrapper[30420]: W0318 10:21:53.579398 30420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda42b137a_07fc_4146_8b2e_086c398dccef.slice/crio-b9817d275ff837e291fb31d8b4f8e340a5ad143fbbeb3e186589f0c973f440e7 WatchSource:0}: Error finding container b9817d275ff837e291fb31d8b4f8e340a5ad143fbbeb3e186589f0c973f440e7: Status 404 returned error can't find the container with id b9817d275ff837e291fb31d8b4f8e340a5ad143fbbeb3e186589f0c973f440e7 Mar 18 10:21:53.781613 master-0 kubenswrapper[30420]: W0318 10:21:53.781541 30420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d3ab44d_452a_4080_b985_0e24d2d5bf5d.slice/crio-e8765ccc37116f5d735c4a81a3b358ccc544b52868d781d69e4edfa4c27b6b5b WatchSource:0}: Error finding container e8765ccc37116f5d735c4a81a3b358ccc544b52868d781d69e4edfa4c27b6b5b: 
Status 404 returned error can't find the container with id e8765ccc37116f5d735c4a81a3b358ccc544b52868d781d69e4edfa4c27b6b5b Mar 18 10:21:53.783327 master-0 kubenswrapper[30420]: I0318 10:21:53.782346 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-bfbvm"] Mar 18 10:21:53.999330 master-0 kubenswrapper[30420]: I0318 10:21:53.998269 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79d8975cbd-5smbb" event={"ID":"a42b137a-07fc-4146-8b2e-086c398dccef","Type":"ContainerStarted","Data":"417e76b9edd27e0b6fd382eb84976d9d84425b8f0a9bc51a7010fb1efbb9aaa3"} Mar 18 10:21:53.999330 master-0 kubenswrapper[30420]: I0318 10:21:53.998379 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79d8975cbd-5smbb" event={"ID":"a42b137a-07fc-4146-8b2e-086c398dccef","Type":"ContainerStarted","Data":"b9817d275ff837e291fb31d8b4f8e340a5ad143fbbeb3e186589f0c973f440e7"} Mar 18 10:21:54.001153 master-0 kubenswrapper[30420]: I0318 10:21:54.001090 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-bfbvm" event={"ID":"8d3ab44d-452a-4080-b985-0e24d2d5bf5d","Type":"ContainerStarted","Data":"e8765ccc37116f5d735c4a81a3b358ccc544b52868d781d69e4edfa4c27b6b5b"} Mar 18 10:21:54.004144 master-0 kubenswrapper[30420]: I0318 10:21:54.004101 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-v4pqg" event={"ID":"feb29aaa-6472-498d-9362-9f56312e248a","Type":"ContainerStarted","Data":"ced582e4f75cc48ab513a19abc3a9f0762cf3976d179237063810a8afaddf855"} Mar 18 10:21:54.006993 master-0 kubenswrapper[30420]: I0318 10:21:54.006938 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-64gfl" 
event={"ID":"f9b32010-f07f-4e5a-a7a6-10e14dd65d91","Type":"ContainerStarted","Data":"f7b6db72be955aa6c03a38a4307216e276443b0c3dd66e4432c500a9a1fe6299"} Mar 18 10:21:54.039977 master-0 kubenswrapper[30420]: I0318 10:21:54.037581 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-79d8975cbd-5smbb" podStartSLOduration=2.03754669 podStartE2EDuration="2.03754669s" podCreationTimestamp="2026-03-18 10:21:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:21:54.021765394 +0000 UTC m=+678.074511343" watchObservedRunningTime="2026-03-18 10:21:54.03754669 +0000 UTC m=+678.090292619" Mar 18 10:22:00.069394 master-0 kubenswrapper[30420]: I0318 10:22:00.069195 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-v4pqg" event={"ID":"feb29aaa-6472-498d-9362-9f56312e248a","Type":"ContainerStarted","Data":"635c887211965a8f2be3886be116fcc75ed6faaed3f780701be64fb8f12c584b"} Mar 18 10:22:00.069394 master-0 kubenswrapper[30420]: I0318 10:22:00.069263 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-v4pqg" event={"ID":"feb29aaa-6472-498d-9362-9f56312e248a","Type":"ContainerStarted","Data":"c12cf7cceb3211c0073e61eb93b636c25c8acc2a1b2ddd76b8c2557b845bcb21"} Mar 18 10:22:00.071534 master-0 kubenswrapper[30420]: I0318 10:22:00.071485 30420 generic.go:334] "Generic (PLEG): container finished" podID="2011b02b-e4a7-43ac-af50-d30a48d38b1b" containerID="93749fb17f9540c600e8beeaf70f960f0cf6e85c0a463b856dc830610458a542" exitCode=0 Mar 18 10:22:00.071690 master-0 kubenswrapper[30420]: I0318 10:22:00.071559 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jr8gm" 
event={"ID":"2011b02b-e4a7-43ac-af50-d30a48d38b1b","Type":"ContainerDied","Data":"93749fb17f9540c600e8beeaf70f960f0cf6e85c0a463b856dc830610458a542"} Mar 18 10:22:00.076018 master-0 kubenswrapper[30420]: I0318 10:22:00.075948 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-clbx9" event={"ID":"49f160ff-7093-4d65-99b5-51ea63e10306","Type":"ContainerStarted","Data":"dcf05be707294c00e7f49185f74d947728c83c42d3852bab41f62c8acb782082"} Mar 18 10:22:00.077094 master-0 kubenswrapper[30420]: I0318 10:22:00.077050 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-clbx9" Mar 18 10:22:00.079794 master-0 kubenswrapper[30420]: I0318 10:22:00.079745 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-bfbvm" event={"ID":"8d3ab44d-452a-4080-b985-0e24d2d5bf5d","Type":"ContainerStarted","Data":"533aeca021c3e4b80c50eb4df8bca1d86f04a4c2f8dbcf182cdd8ae8900efcec"} Mar 18 10:22:00.083648 master-0 kubenswrapper[30420]: I0318 10:22:00.083586 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nqcwb" event={"ID":"0223b511-6041-4268-9c8a-079924b86793","Type":"ContainerStarted","Data":"4e35b5427d939f25caadca02f9eb4ea93ea3fb9970929a32cf91252d6b9d920c"} Mar 18 10:22:00.084431 master-0 kubenswrapper[30420]: I0318 10:22:00.084383 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f558f5558-bfbvm" Mar 18 10:22:00.084694 master-0 kubenswrapper[30420]: I0318 10:22:00.084666 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nqcwb" Mar 18 10:22:00.095095 master-0 kubenswrapper[30420]: I0318 10:22:00.094938 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-v4pqg" 
podStartSLOduration=2.198618913 podStartE2EDuration="8.094916851s" podCreationTimestamp="2026-03-18 10:21:52 +0000 UTC" firstStartedPulling="2026-03-18 10:21:53.266475027 +0000 UTC m=+677.319220956" lastFinishedPulling="2026-03-18 10:21:59.162772965 +0000 UTC m=+683.215518894" observedRunningTime="2026-03-18 10:22:00.09169435 +0000 UTC m=+684.144440279" watchObservedRunningTime="2026-03-18 10:22:00.094916851 +0000 UTC m=+684.147662780" Mar 18 10:22:00.154193 master-0 kubenswrapper[30420]: I0318 10:22:00.154077 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f558f5558-bfbvm" podStartSLOduration=2.752042833 podStartE2EDuration="8.154051455s" podCreationTimestamp="2026-03-18 10:21:52 +0000 UTC" firstStartedPulling="2026-03-18 10:21:53.784921659 +0000 UTC m=+677.837667588" lastFinishedPulling="2026-03-18 10:21:59.186930281 +0000 UTC m=+683.239676210" observedRunningTime="2026-03-18 10:22:00.145798367 +0000 UTC m=+684.198544296" watchObservedRunningTime="2026-03-18 10:22:00.154051455 +0000 UTC m=+684.206797384" Mar 18 10:22:00.187292 master-0 kubenswrapper[30420]: I0318 10:22:00.181213 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nqcwb" podStartSLOduration=2.687641171 podStartE2EDuration="11.181194286s" podCreationTimestamp="2026-03-18 10:21:49 +0000 UTC" firstStartedPulling="2026-03-18 10:21:50.670616765 +0000 UTC m=+674.723362694" lastFinishedPulling="2026-03-18 10:21:59.16416988 +0000 UTC m=+683.216915809" observedRunningTime="2026-03-18 10:22:00.167635766 +0000 UTC m=+684.220381695" watchObservedRunningTime="2026-03-18 10:22:00.181194286 +0000 UTC m=+684.233940215" Mar 18 10:22:00.239884 master-0 kubenswrapper[30420]: I0318 10:22:00.239770 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-clbx9" podStartSLOduration=1.8295799000000001 
podStartE2EDuration="8.239749046s" podCreationTimestamp="2026-03-18 10:21:52 +0000 UTC" firstStartedPulling="2026-03-18 10:21:52.753959983 +0000 UTC m=+676.806705912" lastFinishedPulling="2026-03-18 10:21:59.164129129 +0000 UTC m=+683.216875058" observedRunningTime="2026-03-18 10:22:00.23474833 +0000 UTC m=+684.287494269" watchObservedRunningTime="2026-03-18 10:22:00.239749046 +0000 UTC m=+684.292494975" Mar 18 10:22:01.094496 master-0 kubenswrapper[30420]: I0318 10:22:01.094028 30420 generic.go:334] "Generic (PLEG): container finished" podID="2011b02b-e4a7-43ac-af50-d30a48d38b1b" containerID="b60e35612d4df6fc89c02866d815aaa282ba1e798c374d5cc0d6021a1cc07d3c" exitCode=0 Mar 18 10:22:01.095634 master-0 kubenswrapper[30420]: I0318 10:22:01.095587 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jr8gm" event={"ID":"2011b02b-e4a7-43ac-af50-d30a48d38b1b","Type":"ContainerDied","Data":"b60e35612d4df6fc89c02866d815aaa282ba1e798c374d5cc0d6021a1cc07d3c"} Mar 18 10:22:02.112339 master-0 kubenswrapper[30420]: I0318 10:22:02.111862 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-64gfl" event={"ID":"f9b32010-f07f-4e5a-a7a6-10e14dd65d91","Type":"ContainerStarted","Data":"4a80d5be826b50d9f5666f0b023ab658fc943bf8db67b7eb12cfbb2ae62e7894"} Mar 18 10:22:02.116548 master-0 kubenswrapper[30420]: I0318 10:22:02.116481 30420 generic.go:334] "Generic (PLEG): container finished" podID="2011b02b-e4a7-43ac-af50-d30a48d38b1b" containerID="2751fb34df7296ccea4cf0696a7ba993aed3881bf7e2a90cbeb217bd327b18a3" exitCode=0 Mar 18 10:22:02.116979 master-0 kubenswrapper[30420]: I0318 10:22:02.116576 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jr8gm" event={"ID":"2011b02b-e4a7-43ac-af50-d30a48d38b1b","Type":"ContainerDied","Data":"2751fb34df7296ccea4cf0696a7ba993aed3881bf7e2a90cbeb217bd327b18a3"} Mar 18 10:22:02.155871 master-0 kubenswrapper[30420]: I0318 
10:22:02.155088 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-64gfl" podStartSLOduration=2.100795616 podStartE2EDuration="10.155057416s" podCreationTimestamp="2026-03-18 10:21:52 +0000 UTC" firstStartedPulling="2026-03-18 10:21:53.375799521 +0000 UTC m=+677.428545450" lastFinishedPulling="2026-03-18 10:22:01.430061321 +0000 UTC m=+685.482807250" observedRunningTime="2026-03-18 10:22:02.13688981 +0000 UTC m=+686.189635769" watchObservedRunningTime="2026-03-18 10:22:02.155057416 +0000 UTC m=+686.207803375" Mar 18 10:22:03.050539 master-0 kubenswrapper[30420]: I0318 10:22:03.050405 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-79d8975cbd-5smbb" Mar 18 10:22:03.050539 master-0 kubenswrapper[30420]: I0318 10:22:03.050508 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-79d8975cbd-5smbb" Mar 18 10:22:03.060329 master-0 kubenswrapper[30420]: I0318 10:22:03.060242 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-79d8975cbd-5smbb" Mar 18 10:22:03.128544 master-0 kubenswrapper[30420]: I0318 10:22:03.128473 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jr8gm" event={"ID":"2011b02b-e4a7-43ac-af50-d30a48d38b1b","Type":"ContainerStarted","Data":"030cb90e7d3acce2b4b55c7f53d34d518a8ee4cfbf95d27d6f89b1f68dc07cf7"} Mar 18 10:22:03.128544 master-0 kubenswrapper[30420]: I0318 10:22:03.128545 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jr8gm" event={"ID":"2011b02b-e4a7-43ac-af50-d30a48d38b1b","Type":"ContainerStarted","Data":"ae4ad26da701431c3869964963725f54430658339de27ac022afc1c27247bc61"} Mar 18 10:22:03.129469 master-0 kubenswrapper[30420]: I0318 10:22:03.128561 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jr8gm" 
event={"ID":"2011b02b-e4a7-43ac-af50-d30a48d38b1b","Type":"ContainerStarted","Data":"42179af88d6f67c10d58466ca3673c9a326340d60d4cb7ff3300bee467dc914c"} Mar 18 10:22:03.129469 master-0 kubenswrapper[30420]: I0318 10:22:03.128573 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jr8gm" event={"ID":"2011b02b-e4a7-43ac-af50-d30a48d38b1b","Type":"ContainerStarted","Data":"3cdce5977e0e6956f5a7ccf35e05f242cbd99c78ddaf6827fba7407217638e32"} Mar 18 10:22:03.129469 master-0 kubenswrapper[30420]: I0318 10:22:03.128585 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jr8gm" event={"ID":"2011b02b-e4a7-43ac-af50-d30a48d38b1b","Type":"ContainerStarted","Data":"4872d84ea5ed40a8c56085e7ece8273de6b54ef8fc1b6df0411b1182aa463f75"} Mar 18 10:22:03.133457 master-0 kubenswrapper[30420]: I0318 10:22:03.133415 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-79d8975cbd-5smbb" Mar 18 10:22:03.221912 master-0 kubenswrapper[30420]: I0318 10:22:03.221551 30420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6659f98f4-ccs7g"] Mar 18 10:22:04.149982 master-0 kubenswrapper[30420]: I0318 10:22:04.149697 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jr8gm" event={"ID":"2011b02b-e4a7-43ac-af50-d30a48d38b1b","Type":"ContainerStarted","Data":"32718fee952519a0d0cd82fd35b6727fe91bd8d4b36900a1b069f026668f922d"} Mar 18 10:22:04.200189 master-0 kubenswrapper[30420]: I0318 10:22:04.200102 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-jr8gm" podStartSLOduration=6.457261182 podStartE2EDuration="15.200080184s" podCreationTimestamp="2026-03-18 10:21:49 +0000 UTC" firstStartedPulling="2026-03-18 10:21:50.422292842 +0000 UTC m=+674.475038761" lastFinishedPulling="2026-03-18 10:21:59.165111793 +0000 UTC m=+683.217857763" observedRunningTime="2026-03-18 
10:22:04.199338615 +0000 UTC m=+688.252084554" watchObservedRunningTime="2026-03-18 10:22:04.200080184 +0000 UTC m=+688.252826123"
Mar 18 10:22:05.164653 master-0 kubenswrapper[30420]: I0318 10:22:05.164559 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-jr8gm"
Mar 18 10:22:05.279744 master-0 kubenswrapper[30420]: I0318 10:22:05.279575 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-jr8gm"
Mar 18 10:22:05.332474 master-0 kubenswrapper[30420]: I0318 10:22:05.332348 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-jr8gm"
Mar 18 10:22:07.731092 master-0 kubenswrapper[30420]: I0318 10:22:07.731033 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-clbx9"
Mar 18 10:22:10.262603 master-0 kubenswrapper[30420]: I0318 10:22:10.262513 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nqcwb"
Mar 18 10:22:10.416237 master-0 kubenswrapper[30420]: I0318 10:22:10.416159 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-w62wg"
Mar 18 10:22:11.880143 master-0 kubenswrapper[30420]: I0318 10:22:11.880092 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-nv5ns"
Mar 18 10:22:13.294081 master-0 kubenswrapper[30420]: I0318 10:22:13.293963 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-bfbvm"
Mar 18 10:22:18.468874 master-0 kubenswrapper[30420]: I0318 10:22:18.468767 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/vg-manager-ccdmj"]
Mar 18 10:22:18.471208 master-0 kubenswrapper[30420]: I0318 10:22:18.471160 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:18.475236 master-0 kubenswrapper[30420]: I0318 10:22:18.475199 30420 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"vg-manager-metrics-cert"
Mar 18 10:22:18.479421 master-0 kubenswrapper[30420]: I0318 10:22:18.479374 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-ccdmj"]
Mar 18 10:22:18.507007 master-0 kubenswrapper[30420]: I0318 10:22:18.506948 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/b261d945-73cc-4fe4-acf6-55c7d01fbad0-pod-volumes-dir\") pod \"vg-manager-ccdmj\" (UID: \"b261d945-73cc-4fe4-acf6-55c7d01fbad0\") " pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:18.507198 master-0 kubenswrapper[30420]: I0318 10:22:18.507024 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/b261d945-73cc-4fe4-acf6-55c7d01fbad0-csi-plugin-dir\") pod \"vg-manager-ccdmj\" (UID: \"b261d945-73cc-4fe4-acf6-55c7d01fbad0\") " pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:18.507198 master-0 kubenswrapper[30420]: I0318 10:22:18.507049 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/b261d945-73cc-4fe4-acf6-55c7d01fbad0-node-plugin-dir\") pod \"vg-manager-ccdmj\" (UID: \"b261d945-73cc-4fe4-acf6-55c7d01fbad0\") " pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:18.507198 master-0 kubenswrapper[30420]: I0318 10:22:18.507069 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b261d945-73cc-4fe4-acf6-55c7d01fbad0-sys\") pod \"vg-manager-ccdmj\" (UID: \"b261d945-73cc-4fe4-acf6-55c7d01fbad0\") " pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:18.507198 master-0 kubenswrapper[30420]: I0318 10:22:18.507086 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/b261d945-73cc-4fe4-acf6-55c7d01fbad0-run-udev\") pod \"vg-manager-ccdmj\" (UID: \"b261d945-73cc-4fe4-acf6-55c7d01fbad0\") " pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:18.507352 master-0 kubenswrapper[30420]: I0318 10:22:18.507211 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6dqn\" (UniqueName: \"kubernetes.io/projected/b261d945-73cc-4fe4-acf6-55c7d01fbad0-kube-api-access-x6dqn\") pod \"vg-manager-ccdmj\" (UID: \"b261d945-73cc-4fe4-acf6-55c7d01fbad0\") " pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:18.507352 master-0 kubenswrapper[30420]: I0318 10:22:18.507330 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/b261d945-73cc-4fe4-acf6-55c7d01fbad0-lvmd-config\") pod \"vg-manager-ccdmj\" (UID: \"b261d945-73cc-4fe4-acf6-55c7d01fbad0\") " pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:18.507411 master-0 kubenswrapper[30420]: I0318 10:22:18.507379 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/b261d945-73cc-4fe4-acf6-55c7d01fbad0-device-dir\") pod \"vg-manager-ccdmj\" (UID: \"b261d945-73cc-4fe4-acf6-55c7d01fbad0\") " pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:18.507469 master-0 kubenswrapper[30420]: I0318 10:22:18.507442 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/b261d945-73cc-4fe4-acf6-55c7d01fbad0-file-lock-dir\") pod \"vg-manager-ccdmj\" (UID: \"b261d945-73cc-4fe4-acf6-55c7d01fbad0\") " pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:18.507573 master-0 kubenswrapper[30420]: I0318 10:22:18.507552 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b261d945-73cc-4fe4-acf6-55c7d01fbad0-registration-dir\") pod \"vg-manager-ccdmj\" (UID: \"b261d945-73cc-4fe4-acf6-55c7d01fbad0\") " pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:18.507612 master-0 kubenswrapper[30420]: I0318 10:22:18.507587 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/b261d945-73cc-4fe4-acf6-55c7d01fbad0-metrics-cert\") pod \"vg-manager-ccdmj\" (UID: \"b261d945-73cc-4fe4-acf6-55c7d01fbad0\") " pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:18.608961 master-0 kubenswrapper[30420]: I0318 10:22:18.608905 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/b261d945-73cc-4fe4-acf6-55c7d01fbad0-device-dir\") pod \"vg-manager-ccdmj\" (UID: \"b261d945-73cc-4fe4-acf6-55c7d01fbad0\") " pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:18.609186 master-0 kubenswrapper[30420]: I0318 10:22:18.608979 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/b261d945-73cc-4fe4-acf6-55c7d01fbad0-file-lock-dir\") pod \"vg-manager-ccdmj\" (UID: \"b261d945-73cc-4fe4-acf6-55c7d01fbad0\") " pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:18.609186 master-0 kubenswrapper[30420]: I0318 10:22:18.609025 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b261d945-73cc-4fe4-acf6-55c7d01fbad0-registration-dir\") pod \"vg-manager-ccdmj\" (UID: \"b261d945-73cc-4fe4-acf6-55c7d01fbad0\") " pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:18.609186 master-0 kubenswrapper[30420]: I0318 10:22:18.609041 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/b261d945-73cc-4fe4-acf6-55c7d01fbad0-metrics-cert\") pod \"vg-manager-ccdmj\" (UID: \"b261d945-73cc-4fe4-acf6-55c7d01fbad0\") " pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:18.609186 master-0 kubenswrapper[30420]: I0318 10:22:18.609078 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/b261d945-73cc-4fe4-acf6-55c7d01fbad0-pod-volumes-dir\") pod \"vg-manager-ccdmj\" (UID: \"b261d945-73cc-4fe4-acf6-55c7d01fbad0\") " pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:18.609323 master-0 kubenswrapper[30420]: I0318 10:22:18.609217 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/b261d945-73cc-4fe4-acf6-55c7d01fbad0-device-dir\") pod \"vg-manager-ccdmj\" (UID: \"b261d945-73cc-4fe4-acf6-55c7d01fbad0\") " pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:18.609323 master-0 kubenswrapper[30420]: I0318 10:22:18.609293 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/b261d945-73cc-4fe4-acf6-55c7d01fbad0-csi-plugin-dir\") pod \"vg-manager-ccdmj\" (UID: \"b261d945-73cc-4fe4-acf6-55c7d01fbad0\") " pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:18.609387 master-0 kubenswrapper[30420]: I0318 10:22:18.609334 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/b261d945-73cc-4fe4-acf6-55c7d01fbad0-node-plugin-dir\") pod \"vg-manager-ccdmj\" (UID: \"b261d945-73cc-4fe4-acf6-55c7d01fbad0\") " pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:18.609387 master-0 kubenswrapper[30420]: I0318 10:22:18.609374 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b261d945-73cc-4fe4-acf6-55c7d01fbad0-sys\") pod \"vg-manager-ccdmj\" (UID: \"b261d945-73cc-4fe4-acf6-55c7d01fbad0\") " pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:18.609450 master-0 kubenswrapper[30420]: I0318 10:22:18.609391 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/b261d945-73cc-4fe4-acf6-55c7d01fbad0-run-udev\") pod \"vg-manager-ccdmj\" (UID: \"b261d945-73cc-4fe4-acf6-55c7d01fbad0\") " pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:18.609482 master-0 kubenswrapper[30420]: I0318 10:22:18.609450 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6dqn\" (UniqueName: \"kubernetes.io/projected/b261d945-73cc-4fe4-acf6-55c7d01fbad0-kube-api-access-x6dqn\") pod \"vg-manager-ccdmj\" (UID: \"b261d945-73cc-4fe4-acf6-55c7d01fbad0\") " pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:18.609515 master-0 kubenswrapper[30420]: I0318 10:22:18.609497 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/b261d945-73cc-4fe4-acf6-55c7d01fbad0-lvmd-config\") pod \"vg-manager-ccdmj\" (UID: \"b261d945-73cc-4fe4-acf6-55c7d01fbad0\") " pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:18.609761 master-0 kubenswrapper[30420]: I0318 10:22:18.609739 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/b261d945-73cc-4fe4-acf6-55c7d01fbad0-lvmd-config\") pod \"vg-manager-ccdmj\" (UID: \"b261d945-73cc-4fe4-acf6-55c7d01fbad0\") " pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:18.609833 master-0 kubenswrapper[30420]: I0318 10:22:18.609796 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/b261d945-73cc-4fe4-acf6-55c7d01fbad0-pod-volumes-dir\") pod \"vg-manager-ccdmj\" (UID: \"b261d945-73cc-4fe4-acf6-55c7d01fbad0\") " pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:18.609877 master-0 kubenswrapper[30420]: I0318 10:22:18.609846 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b261d945-73cc-4fe4-acf6-55c7d01fbad0-registration-dir\") pod \"vg-manager-ccdmj\" (UID: \"b261d945-73cc-4fe4-acf6-55c7d01fbad0\") " pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:18.610416 master-0 kubenswrapper[30420]: I0318 10:22:18.610261 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b261d945-73cc-4fe4-acf6-55c7d01fbad0-sys\") pod \"vg-manager-ccdmj\" (UID: \"b261d945-73cc-4fe4-acf6-55c7d01fbad0\") " pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:18.610416 master-0 kubenswrapper[30420]: I0318 10:22:18.610279 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/b261d945-73cc-4fe4-acf6-55c7d01fbad0-run-udev\") pod \"vg-manager-ccdmj\" (UID: \"b261d945-73cc-4fe4-acf6-55c7d01fbad0\") " pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:18.610416 master-0 kubenswrapper[30420]: I0318 10:22:18.610394 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/b261d945-73cc-4fe4-acf6-55c7d01fbad0-csi-plugin-dir\") pod \"vg-manager-ccdmj\" (UID: \"b261d945-73cc-4fe4-acf6-55c7d01fbad0\") " pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:18.610581 master-0 kubenswrapper[30420]: I0318 10:22:18.610524 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/b261d945-73cc-4fe4-acf6-55c7d01fbad0-node-plugin-dir\") pod \"vg-manager-ccdmj\" (UID: \"b261d945-73cc-4fe4-acf6-55c7d01fbad0\") " pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:18.610955 master-0 kubenswrapper[30420]: I0318 10:22:18.610847 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/b261d945-73cc-4fe4-acf6-55c7d01fbad0-file-lock-dir\") pod \"vg-manager-ccdmj\" (UID: \"b261d945-73cc-4fe4-acf6-55c7d01fbad0\") " pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:18.613317 master-0 kubenswrapper[30420]: I0318 10:22:18.613139 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/b261d945-73cc-4fe4-acf6-55c7d01fbad0-metrics-cert\") pod \"vg-manager-ccdmj\" (UID: \"b261d945-73cc-4fe4-acf6-55c7d01fbad0\") " pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:18.635650 master-0 kubenswrapper[30420]: I0318 10:22:18.635607 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6dqn\" (UniqueName: \"kubernetes.io/projected/b261d945-73cc-4fe4-acf6-55c7d01fbad0-kube-api-access-x6dqn\") pod \"vg-manager-ccdmj\" (UID: \"b261d945-73cc-4fe4-acf6-55c7d01fbad0\") " pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:18.829101 master-0 kubenswrapper[30420]: I0318 10:22:18.829029 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:19.386259 master-0 kubenswrapper[30420]: I0318 10:22:19.386184 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-ccdmj"]
Mar 18 10:22:20.281162 master-0 kubenswrapper[30420]: I0318 10:22:20.281087 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-jr8gm"
Mar 18 10:22:20.330366 master-0 kubenswrapper[30420]: I0318 10:22:20.330295 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-ccdmj" event={"ID":"b261d945-73cc-4fe4-acf6-55c7d01fbad0","Type":"ContainerStarted","Data":"8c3048932b40406e0d9d44c60d3ff7722b53425411fad0a2b4914f9cf7f2bfc2"}
Mar 18 10:22:20.330366 master-0 kubenswrapper[30420]: I0318 10:22:20.330351 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-ccdmj" event={"ID":"b261d945-73cc-4fe4-acf6-55c7d01fbad0","Type":"ContainerStarted","Data":"756f1b82d4d8128ac3c175ab1b8855fc0218a40c708adbb682bbbf4885c0e2db"}
Mar 18 10:22:20.360658 master-0 kubenswrapper[30420]: I0318 10:22:20.360583 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/vg-manager-ccdmj" podStartSLOduration=2.360565248 podStartE2EDuration="2.360565248s" podCreationTimestamp="2026-03-18 10:22:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:22:20.354364683 +0000 UTC m=+704.407110612" watchObservedRunningTime="2026-03-18 10:22:20.360565248 +0000 UTC m=+704.413311177"
Mar 18 10:22:21.341796 master-0 kubenswrapper[30420]: I0318 10:22:21.341753 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-ccdmj_b261d945-73cc-4fe4-acf6-55c7d01fbad0/vg-manager/0.log"
Mar 18 10:22:21.342296 master-0 kubenswrapper[30420]: I0318 10:22:21.341803 30420 generic.go:334] "Generic (PLEG): container finished" podID="b261d945-73cc-4fe4-acf6-55c7d01fbad0" containerID="8c3048932b40406e0d9d44c60d3ff7722b53425411fad0a2b4914f9cf7f2bfc2" exitCode=1
Mar 18 10:22:21.342296 master-0 kubenswrapper[30420]: I0318 10:22:21.341852 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-ccdmj" event={"ID":"b261d945-73cc-4fe4-acf6-55c7d01fbad0","Type":"ContainerDied","Data":"8c3048932b40406e0d9d44c60d3ff7722b53425411fad0a2b4914f9cf7f2bfc2"}
Mar 18 10:22:21.342296 master-0 kubenswrapper[30420]: I0318 10:22:21.342199 30420 scope.go:117] "RemoveContainer" containerID="8c3048932b40406e0d9d44c60d3ff7722b53425411fad0a2b4914f9cf7f2bfc2"
Mar 18 10:22:21.698594 master-0 kubenswrapper[30420]: I0318 10:22:21.691392 30420 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock"
Mar 18 10:22:22.352098 master-0 kubenswrapper[30420]: I0318 10:22:22.352060 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-ccdmj_b261d945-73cc-4fe4-acf6-55c7d01fbad0/vg-manager/0.log"
Mar 18 10:22:22.352551 master-0 kubenswrapper[30420]: I0318 10:22:22.352107 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-ccdmj" event={"ID":"b261d945-73cc-4fe4-acf6-55c7d01fbad0","Type":"ContainerStarted","Data":"0c35199e21ad87903316ba235ee350f064b9d6a93f3b5e241fdb65680c208b57"}
Mar 18 10:22:22.509451 master-0 kubenswrapper[30420]: I0318 10:22:22.509310 30420 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock","Timestamp":"2026-03-18T10:22:21.691449312Z","Handler":null,"Name":""}
Mar 18 10:22:22.511424 master-0 kubenswrapper[30420]: I0318 10:22:22.511402 30420 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock versions: 1.0.0
Mar 18 10:22:22.511570 master-0 kubenswrapper[30420]: I0318 10:22:22.511556 30420 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: topolvm.io at endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock
Mar 18 10:22:25.439716 master-0 kubenswrapper[30420]: I0318 10:22:25.439656 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-mpdps"]
Mar 18 10:22:25.442079 master-0 kubenswrapper[30420]: I0318 10:22:25.441725 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-mpdps"
Mar 18 10:22:25.444370 master-0 kubenswrapper[30420]: I0318 10:22:25.444332 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Mar 18 10:22:25.450804 master-0 kubenswrapper[30420]: I0318 10:22:25.450746 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Mar 18 10:22:25.461243 master-0 kubenswrapper[30420]: I0318 10:22:25.459946 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-mpdps"]
Mar 18 10:22:25.635319 master-0 kubenswrapper[30420]: I0318 10:22:25.635195 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjjmq\" (UniqueName: \"kubernetes.io/projected/d1523b24-886d-45df-8a0c-bea036940823-kube-api-access-jjjmq\") pod \"openstack-operator-index-mpdps\" (UID: \"d1523b24-886d-45df-8a0c-bea036940823\") " pod="openstack-operators/openstack-operator-index-mpdps"
Mar 18 10:22:25.737348 master-0 kubenswrapper[30420]: I0318 10:22:25.737197 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjjmq\" (UniqueName: \"kubernetes.io/projected/d1523b24-886d-45df-8a0c-bea036940823-kube-api-access-jjjmq\") pod \"openstack-operator-index-mpdps\" (UID: \"d1523b24-886d-45df-8a0c-bea036940823\") " pod="openstack-operators/openstack-operator-index-mpdps"
Mar 18 10:22:25.763216 master-0 kubenswrapper[30420]: I0318 10:22:25.763149 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjjmq\" (UniqueName: \"kubernetes.io/projected/d1523b24-886d-45df-8a0c-bea036940823-kube-api-access-jjjmq\") pod \"openstack-operator-index-mpdps\" (UID: \"d1523b24-886d-45df-8a0c-bea036940823\") " pod="openstack-operators/openstack-operator-index-mpdps"
Mar 18 10:22:25.768023 master-0 kubenswrapper[30420]: I0318 10:22:25.767974 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-mpdps"
Mar 18 10:22:26.296853 master-0 kubenswrapper[30420]: I0318 10:22:26.293473 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-mpdps"]
Mar 18 10:22:26.415930 master-0 kubenswrapper[30420]: I0318 10:22:26.411562 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mpdps" event={"ID":"d1523b24-886d-45df-8a0c-bea036940823","Type":"ContainerStarted","Data":"ca2787868c420ade5dbb78126f1217f09b147421b537ce6efe16c361a68a46c1"}
Mar 18 10:22:28.278396 master-0 kubenswrapper[30420]: I0318 10:22:28.278276 30420 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6659f98f4-ccs7g" podUID="0a3e75ac-917b-4aff-a146-89f408145ec5" containerName="console" containerID="cri-o://8c94dbc385994221c233de438ce49c36b013a2b5464bffabec141ab24ec18a6e" gracePeriod=15
Mar 18 10:22:28.435213 master-0 kubenswrapper[30420]: I0318 10:22:28.435108 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mpdps" event={"ID":"d1523b24-886d-45df-8a0c-bea036940823","Type":"ContainerStarted","Data":"ba1b951d867191af26b45a54d8beafb011b5214f391dcd24e7faef1640cec26c"}
Mar 18 10:22:28.438309 master-0 kubenswrapper[30420]: I0318 10:22:28.437971 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6659f98f4-ccs7g_0a3e75ac-917b-4aff-a146-89f408145ec5/console/0.log"
Mar 18 10:22:28.438309 master-0 kubenswrapper[30420]: I0318 10:22:28.438008 30420 generic.go:334] "Generic (PLEG): container finished" podID="0a3e75ac-917b-4aff-a146-89f408145ec5" containerID="8c94dbc385994221c233de438ce49c36b013a2b5464bffabec141ab24ec18a6e" exitCode=2
Mar 18 10:22:28.438309 master-0 kubenswrapper[30420]: I0318 10:22:28.438033 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6659f98f4-ccs7g" event={"ID":"0a3e75ac-917b-4aff-a146-89f408145ec5","Type":"ContainerDied","Data":"8c94dbc385994221c233de438ce49c36b013a2b5464bffabec141ab24ec18a6e"}
Mar 18 10:22:28.467852 master-0 kubenswrapper[30420]: I0318 10:22:28.467688 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-mpdps" podStartSLOduration=2.556875616 podStartE2EDuration="3.467669825s" podCreationTimestamp="2026-03-18 10:22:25 +0000 UTC" firstStartedPulling="2026-03-18 10:22:26.314317859 +0000 UTC m=+710.367063788" lastFinishedPulling="2026-03-18 10:22:27.225112058 +0000 UTC m=+711.277857997" observedRunningTime="2026-03-18 10:22:28.464921176 +0000 UTC m=+712.517667115" watchObservedRunningTime="2026-03-18 10:22:28.467669825 +0000 UTC m=+712.520415764"
Mar 18 10:22:28.830164 master-0 kubenswrapper[30420]: I0318 10:22:28.830106 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:28.833604 master-0 kubenswrapper[30420]: I0318 10:22:28.833560 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:28.874031 master-0 kubenswrapper[30420]: I0318 10:22:28.873983 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6659f98f4-ccs7g_0a3e75ac-917b-4aff-a146-89f408145ec5/console/0.log"
Mar 18 10:22:28.874319 master-0 kubenswrapper[30420]: I0318 10:22:28.874054 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6659f98f4-ccs7g"
Mar 18 10:22:29.022752 master-0 kubenswrapper[30420]: I0318 10:22:29.022638 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0a3e75ac-917b-4aff-a146-89f408145ec5-console-config\") pod \"0a3e75ac-917b-4aff-a146-89f408145ec5\" (UID: \"0a3e75ac-917b-4aff-a146-89f408145ec5\") "
Mar 18 10:22:29.023092 master-0 kubenswrapper[30420]: I0318 10:22:29.022850 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28tgw\" (UniqueName: \"kubernetes.io/projected/0a3e75ac-917b-4aff-a146-89f408145ec5-kube-api-access-28tgw\") pod \"0a3e75ac-917b-4aff-a146-89f408145ec5\" (UID: \"0a3e75ac-917b-4aff-a146-89f408145ec5\") "
Mar 18 10:22:29.023092 master-0 kubenswrapper[30420]: I0318 10:22:29.022921 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0a3e75ac-917b-4aff-a146-89f408145ec5-oauth-serving-cert\") pod \"0a3e75ac-917b-4aff-a146-89f408145ec5\" (UID: \"0a3e75ac-917b-4aff-a146-89f408145ec5\") "
Mar 18 10:22:29.023092 master-0 kubenswrapper[30420]: I0318 10:22:29.022997 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0a3e75ac-917b-4aff-a146-89f408145ec5-service-ca\") pod \"0a3e75ac-917b-4aff-a146-89f408145ec5\" (UID: \"0a3e75ac-917b-4aff-a146-89f408145ec5\") "
Mar 18 10:22:29.023092 master-0 kubenswrapper[30420]: I0318 10:22:29.023047 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0a3e75ac-917b-4aff-a146-89f408145ec5-console-oauth-config\") pod \"0a3e75ac-917b-4aff-a146-89f408145ec5\" (UID: \"0a3e75ac-917b-4aff-a146-89f408145ec5\") "
Mar 18 10:22:29.023379 master-0 kubenswrapper[30420]: I0318 10:22:29.023110 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a3e75ac-917b-4aff-a146-89f408145ec5-trusted-ca-bundle\") pod \"0a3e75ac-917b-4aff-a146-89f408145ec5\" (UID: \"0a3e75ac-917b-4aff-a146-89f408145ec5\") "
Mar 18 10:22:29.023379 master-0 kubenswrapper[30420]: I0318 10:22:29.023149 30420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a3e75ac-917b-4aff-a146-89f408145ec5-console-serving-cert\") pod \"0a3e75ac-917b-4aff-a146-89f408145ec5\" (UID: \"0a3e75ac-917b-4aff-a146-89f408145ec5\") "
Mar 18 10:22:29.023886 master-0 kubenswrapper[30420]: I0318 10:22:29.023812 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a3e75ac-917b-4aff-a146-89f408145ec5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "0a3e75ac-917b-4aff-a146-89f408145ec5" (UID: "0a3e75ac-917b-4aff-a146-89f408145ec5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 10:22:29.024151 master-0 kubenswrapper[30420]: I0318 10:22:29.023931 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a3e75ac-917b-4aff-a146-89f408145ec5-console-config" (OuterVolumeSpecName: "console-config") pod "0a3e75ac-917b-4aff-a146-89f408145ec5" (UID: "0a3e75ac-917b-4aff-a146-89f408145ec5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 10:22:29.024279 master-0 kubenswrapper[30420]: I0318 10:22:29.024247 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a3e75ac-917b-4aff-a146-89f408145ec5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "0a3e75ac-917b-4aff-a146-89f408145ec5" (UID: "0a3e75ac-917b-4aff-a146-89f408145ec5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 10:22:29.025280 master-0 kubenswrapper[30420]: I0318 10:22:29.025209 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a3e75ac-917b-4aff-a146-89f408145ec5-service-ca" (OuterVolumeSpecName: "service-ca") pod "0a3e75ac-917b-4aff-a146-89f408145ec5" (UID: "0a3e75ac-917b-4aff-a146-89f408145ec5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 10:22:29.028797 master-0 kubenswrapper[30420]: I0318 10:22:29.028688 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a3e75ac-917b-4aff-a146-89f408145ec5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "0a3e75ac-917b-4aff-a146-89f408145ec5" (UID: "0a3e75ac-917b-4aff-a146-89f408145ec5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 10:22:29.029209 master-0 kubenswrapper[30420]: I0318 10:22:29.029169 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a3e75ac-917b-4aff-a146-89f408145ec5-kube-api-access-28tgw" (OuterVolumeSpecName: "kube-api-access-28tgw") pod "0a3e75ac-917b-4aff-a146-89f408145ec5" (UID: "0a3e75ac-917b-4aff-a146-89f408145ec5"). InnerVolumeSpecName "kube-api-access-28tgw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 10:22:29.032155 master-0 kubenswrapper[30420]: I0318 10:22:29.032048 30420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a3e75ac-917b-4aff-a146-89f408145ec5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "0a3e75ac-917b-4aff-a146-89f408145ec5" (UID: "0a3e75ac-917b-4aff-a146-89f408145ec5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 10:22:29.125656 master-0 kubenswrapper[30420]: I0318 10:22:29.125547 30420 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0a3e75ac-917b-4aff-a146-89f408145ec5-console-config\") on node \"master-0\" DevicePath \"\""
Mar 18 10:22:29.125656 master-0 kubenswrapper[30420]: I0318 10:22:29.125617 30420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28tgw\" (UniqueName: \"kubernetes.io/projected/0a3e75ac-917b-4aff-a146-89f408145ec5-kube-api-access-28tgw\") on node \"master-0\" DevicePath \"\""
Mar 18 10:22:29.125656 master-0 kubenswrapper[30420]: I0318 10:22:29.125643 30420 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0a3e75ac-917b-4aff-a146-89f408145ec5-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 10:22:29.125656 master-0 kubenswrapper[30420]: I0318 10:22:29.125663 30420 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0a3e75ac-917b-4aff-a146-89f408145ec5-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 10:22:29.125656 master-0 kubenswrapper[30420]: I0318 10:22:29.125682 30420 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0a3e75ac-917b-4aff-a146-89f408145ec5-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Mar 18 10:22:29.126262 master-0 kubenswrapper[30420]: I0318 10:22:29.125705 30420 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a3e75ac-917b-4aff-a146-89f408145ec5-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 10:22:29.126262 master-0 kubenswrapper[30420]: I0318 10:22:29.125725 30420 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a3e75ac-917b-4aff-a146-89f408145ec5-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 10:22:29.448968 master-0 kubenswrapper[30420]: I0318 10:22:29.448803 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6659f98f4-ccs7g_0a3e75ac-917b-4aff-a146-89f408145ec5/console/0.log"
Mar 18 10:22:29.449556 master-0 kubenswrapper[30420]: I0318 10:22:29.448967 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6659f98f4-ccs7g" event={"ID":"0a3e75ac-917b-4aff-a146-89f408145ec5","Type":"ContainerDied","Data":"524fcd697476dc8aeba9f98e2e08153b100d3c8cfe6a938c437df38a2198027c"}
Mar 18 10:22:29.449556 master-0 kubenswrapper[30420]: I0318 10:22:29.449068 30420 scope.go:117] "RemoveContainer" containerID="8c94dbc385994221c233de438ce49c36b013a2b5464bffabec141ab24ec18a6e"
Mar 18 10:22:29.449556 master-0 kubenswrapper[30420]: I0318 10:22:29.449359 30420 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6659f98f4-ccs7g"
Mar 18 10:22:29.449556 master-0 kubenswrapper[30420]: I0318 10:22:29.449512 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:29.450451 master-0 kubenswrapper[30420]: I0318 10:22:29.450417 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/vg-manager-ccdmj"
Mar 18 10:22:29.576171 master-0 kubenswrapper[30420]: I0318 10:22:29.576118 30420 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6659f98f4-ccs7g"]
Mar 18 10:22:29.583973 master-0 kubenswrapper[30420]: I0318 10:22:29.583915 30420 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6659f98f4-ccs7g"]
Mar 18 10:22:30.180284 master-0 kubenswrapper[30420]: I0318 10:22:30.180199 30420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a3e75ac-917b-4aff-a146-89f408145ec5" path="/var/lib/kubelet/pods/0a3e75ac-917b-4aff-a146-89f408145ec5/volumes"
Mar 18 10:22:35.769236 master-0 kubenswrapper[30420]: I0318 10:22:35.769110 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-mpdps"
Mar 18 10:22:35.769236 master-0 kubenswrapper[30420]: I0318 10:22:35.769217 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-mpdps"
Mar 18 10:22:35.815009 master-0 kubenswrapper[30420]: I0318 10:22:35.814935 30420 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-mpdps"
Mar 18 10:22:36.547599 master-0 kubenswrapper[30420]: I0318 10:22:36.547542 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-mpdps"
Mar 18 10:27:37.317642 master-0 kubenswrapper[30420]: I0318 10:27:37.317563 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mp5s6/must-gather-rbzdd"]
Mar 18 10:27:37.319736 master-0 kubenswrapper[30420]: E0318 10:27:37.318086 30420 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a3e75ac-917b-4aff-a146-89f408145ec5" containerName="console"
Mar 18 10:27:37.319736 master-0 kubenswrapper[30420]: I0318 10:27:37.318101 30420 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a3e75ac-917b-4aff-a146-89f408145ec5" containerName="console"
Mar 18 10:27:37.319736 master-0 kubenswrapper[30420]: I0318 10:27:37.318260 30420 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a3e75ac-917b-4aff-a146-89f408145ec5" containerName="console"
Mar 18 10:27:37.319736 master-0 kubenswrapper[30420]: I0318 10:27:37.319108 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mp5s6/must-gather-rbzdd"
Mar 18 10:27:37.321019 master-0 kubenswrapper[30420]: I0318 10:27:37.320753 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-mp5s6"/"openshift-service-ca.crt"
Mar 18 10:27:37.321019 master-0 kubenswrapper[30420]: I0318 10:27:37.320871 30420 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-mp5s6"/"kube-root-ca.crt"
Mar 18 10:27:37.327894 master-0 kubenswrapper[30420]: I0318 10:27:37.327833 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mp5s6/must-gather-bknkh"]
Mar 18 10:27:37.329589 master-0 kubenswrapper[30420]: I0318 10:27:37.329550 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mp5s6/must-gather-bknkh"
Mar 18 10:27:37.340466 master-0 kubenswrapper[30420]: I0318 10:27:37.340392 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mp5s6/must-gather-rbzdd"]
Mar 18 10:27:37.351315 master-0 kubenswrapper[30420]: I0318 10:27:37.351246 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mp5s6/must-gather-bknkh"]
Mar 18 10:27:37.424727 master-0 kubenswrapper[30420]: I0318 10:27:37.424669 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx74t\" (UniqueName: \"kubernetes.io/projected/66bcef0f-4b5c-4290-b9cc-f5e121b30553-kube-api-access-vx74t\") pod \"must-gather-rbzdd\" (UID: \"66bcef0f-4b5c-4290-b9cc-f5e121b30553\") " pod="openshift-must-gather-mp5s6/must-gather-rbzdd"
Mar 18 10:27:37.425036 master-0 kubenswrapper[30420]: I0318 10:27:37.425022 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/66bcef0f-4b5c-4290-b9cc-f5e121b30553-must-gather-output\") pod \"must-gather-rbzdd\" (UID: \"66bcef0f-4b5c-4290-b9cc-f5e121b30553\") " pod="openshift-must-gather-mp5s6/must-gather-rbzdd"
Mar 18 10:27:37.425184 master-0 kubenswrapper[30420]: I0318 10:27:37.425171 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4500427c-b161-4f81-a0ca-9bd2005b33d3-must-gather-output\") pod \"must-gather-bknkh\" (UID: \"4500427c-b161-4f81-a0ca-9bd2005b33d3\") " pod="openshift-must-gather-mp5s6/must-gather-bknkh"
Mar 18 10:27:37.425302 master-0 kubenswrapper[30420]: I0318 10:27:37.425283 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbcdh\" (UniqueName:
\"kubernetes.io/projected/4500427c-b161-4f81-a0ca-9bd2005b33d3-kube-api-access-zbcdh\") pod \"must-gather-bknkh\" (UID: \"4500427c-b161-4f81-a0ca-9bd2005b33d3\") " pod="openshift-must-gather-mp5s6/must-gather-bknkh" Mar 18 10:27:37.532642 master-0 kubenswrapper[30420]: I0318 10:27:37.532594 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx74t\" (UniqueName: \"kubernetes.io/projected/66bcef0f-4b5c-4290-b9cc-f5e121b30553-kube-api-access-vx74t\") pod \"must-gather-rbzdd\" (UID: \"66bcef0f-4b5c-4290-b9cc-f5e121b30553\") " pod="openshift-must-gather-mp5s6/must-gather-rbzdd" Mar 18 10:27:37.533364 master-0 kubenswrapper[30420]: I0318 10:27:37.533348 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/66bcef0f-4b5c-4290-b9cc-f5e121b30553-must-gather-output\") pod \"must-gather-rbzdd\" (UID: \"66bcef0f-4b5c-4290-b9cc-f5e121b30553\") " pod="openshift-must-gather-mp5s6/must-gather-rbzdd" Mar 18 10:27:37.533553 master-0 kubenswrapper[30420]: I0318 10:27:37.533538 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4500427c-b161-4f81-a0ca-9bd2005b33d3-must-gather-output\") pod \"must-gather-bknkh\" (UID: \"4500427c-b161-4f81-a0ca-9bd2005b33d3\") " pod="openshift-must-gather-mp5s6/must-gather-bknkh" Mar 18 10:27:37.533663 master-0 kubenswrapper[30420]: I0318 10:27:37.533649 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbcdh\" (UniqueName: \"kubernetes.io/projected/4500427c-b161-4f81-a0ca-9bd2005b33d3-kube-api-access-zbcdh\") pod \"must-gather-bknkh\" (UID: \"4500427c-b161-4f81-a0ca-9bd2005b33d3\") " pod="openshift-must-gather-mp5s6/must-gather-bknkh" Mar 18 10:27:37.534482 master-0 kubenswrapper[30420]: I0318 10:27:37.534451 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/66bcef0f-4b5c-4290-b9cc-f5e121b30553-must-gather-output\") pod \"must-gather-rbzdd\" (UID: \"66bcef0f-4b5c-4290-b9cc-f5e121b30553\") " pod="openshift-must-gather-mp5s6/must-gather-rbzdd" Mar 18 10:27:37.534965 master-0 kubenswrapper[30420]: I0318 10:27:37.534940 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4500427c-b161-4f81-a0ca-9bd2005b33d3-must-gather-output\") pod \"must-gather-bknkh\" (UID: \"4500427c-b161-4f81-a0ca-9bd2005b33d3\") " pod="openshift-must-gather-mp5s6/must-gather-bknkh" Mar 18 10:27:37.554887 master-0 kubenswrapper[30420]: I0318 10:27:37.554720 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx74t\" (UniqueName: \"kubernetes.io/projected/66bcef0f-4b5c-4290-b9cc-f5e121b30553-kube-api-access-vx74t\") pod \"must-gather-rbzdd\" (UID: \"66bcef0f-4b5c-4290-b9cc-f5e121b30553\") " pod="openshift-must-gather-mp5s6/must-gather-rbzdd" Mar 18 10:27:37.557632 master-0 kubenswrapper[30420]: I0318 10:27:37.557596 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbcdh\" (UniqueName: \"kubernetes.io/projected/4500427c-b161-4f81-a0ca-9bd2005b33d3-kube-api-access-zbcdh\") pod \"must-gather-bknkh\" (UID: \"4500427c-b161-4f81-a0ca-9bd2005b33d3\") " pod="openshift-must-gather-mp5s6/must-gather-bknkh" Mar 18 10:27:37.649566 master-0 kubenswrapper[30420]: I0318 10:27:37.649426 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mp5s6/must-gather-rbzdd" Mar 18 10:27:37.673105 master-0 kubenswrapper[30420]: I0318 10:27:37.673013 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mp5s6/must-gather-bknkh" Mar 18 10:27:38.102754 master-0 kubenswrapper[30420]: I0318 10:27:38.102698 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mp5s6/must-gather-bknkh"] Mar 18 10:27:38.112430 master-0 kubenswrapper[30420]: W0318 10:27:38.112381 30420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4500427c_b161_4f81_a0ca_9bd2005b33d3.slice/crio-fe4ad9b94b0e817c8e1bf7d44c9e5f3dca1d40b9433e7ea2cba233835ec0ba10 WatchSource:0}: Error finding container fe4ad9b94b0e817c8e1bf7d44c9e5f3dca1d40b9433e7ea2cba233835ec0ba10: Status 404 returned error can't find the container with id fe4ad9b94b0e817c8e1bf7d44c9e5f3dca1d40b9433e7ea2cba233835ec0ba10 Mar 18 10:27:38.114531 master-0 kubenswrapper[30420]: I0318 10:27:38.114490 30420 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 10:27:38.778473 master-0 kubenswrapper[30420]: I0318 10:27:38.776959 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mp5s6/must-gather-rbzdd"] Mar 18 10:27:38.784639 master-0 kubenswrapper[30420]: W0318 10:27:38.784578 30420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66bcef0f_4b5c_4290_b9cc_f5e121b30553.slice/crio-cb00daa45f2494b53b3b2f88cddde26ab5e1ea871e2c053a909edb83ab102d81 WatchSource:0}: Error finding container cb00daa45f2494b53b3b2f88cddde26ab5e1ea871e2c053a909edb83ab102d81: Status 404 returned error can't find the container with id cb00daa45f2494b53b3b2f88cddde26ab5e1ea871e2c053a909edb83ab102d81 Mar 18 10:27:38.901301 master-0 kubenswrapper[30420]: I0318 10:27:38.901229 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mp5s6/must-gather-rbzdd" 
event={"ID":"66bcef0f-4b5c-4290-b9cc-f5e121b30553","Type":"ContainerStarted","Data":"cb00daa45f2494b53b3b2f88cddde26ab5e1ea871e2c053a909edb83ab102d81"} Mar 18 10:27:38.903443 master-0 kubenswrapper[30420]: I0318 10:27:38.903393 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mp5s6/must-gather-bknkh" event={"ID":"4500427c-b161-4f81-a0ca-9bd2005b33d3","Type":"ContainerStarted","Data":"fe4ad9b94b0e817c8e1bf7d44c9e5f3dca1d40b9433e7ea2cba233835ec0ba10"} Mar 18 10:27:40.925092 master-0 kubenswrapper[30420]: I0318 10:27:40.925027 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mp5s6/must-gather-rbzdd" event={"ID":"66bcef0f-4b5c-4290-b9cc-f5e121b30553","Type":"ContainerStarted","Data":"3c41e15960ca93daa92db97b6587862bb84ef736b2fbf3181d3ffc2040ef7c0a"} Mar 18 10:27:41.943904 master-0 kubenswrapper[30420]: I0318 10:27:41.943155 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mp5s6/must-gather-rbzdd" event={"ID":"66bcef0f-4b5c-4290-b9cc-f5e121b30553","Type":"ContainerStarted","Data":"d4e6b67d651ef61cf031df640a536fffd736ff9b75181d0eecd72f63689c2658"} Mar 18 10:27:41.982472 master-0 kubenswrapper[30420]: I0318 10:27:41.973794 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-mp5s6/must-gather-rbzdd" podStartSLOduration=3.686839826 podStartE2EDuration="4.973768235s" podCreationTimestamp="2026-03-18 10:27:37 +0000 UTC" firstStartedPulling="2026-03-18 10:27:38.787418395 +0000 UTC m=+1022.840164324" lastFinishedPulling="2026-03-18 10:27:40.074346804 +0000 UTC m=+1024.127092733" observedRunningTime="2026-03-18 10:27:41.965106537 +0000 UTC m=+1026.017852466" watchObservedRunningTime="2026-03-18 10:27:41.973768235 +0000 UTC m=+1026.026514164" Mar 18 10:27:43.307997 master-0 kubenswrapper[30420]: I0318 10:27:43.306810 30420 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-version_cluster-version-operator-7d58488df-9nd2s_432f611b-a1a2-4cc9-b005-17a16413d281/cluster-version-operator/1.log" Mar 18 10:27:43.800706 master-0 kubenswrapper[30420]: I0318 10:27:43.800653 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-7d58488df-9nd2s_432f611b-a1a2-4cc9-b005-17a16413d281/cluster-version-operator/0.log" Mar 18 10:27:47.931229 master-0 kubenswrapper[30420]: I0318 10:27:47.931172 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-w62wg_df659e73-45f8-4601-a966-c2de80fd6ba2/controller/0.log" Mar 18 10:27:48.607332 master-0 kubenswrapper[30420]: I0318 10:27:48.607277 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-w62wg_df659e73-45f8-4601-a966-c2de80fd6ba2/kube-rbac-proxy/0.log" Mar 18 10:27:48.710123 master-0 kubenswrapper[30420]: I0318 10:27:48.709891 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8gm_2011b02b-e4a7-43ac-af50-d30a48d38b1b/controller/0.log" Mar 18 10:27:48.770989 master-0 kubenswrapper[30420]: I0318 10:27:48.770207 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8gm_2011b02b-e4a7-43ac-af50-d30a48d38b1b/frr/0.log" Mar 18 10:27:48.892961 master-0 kubenswrapper[30420]: I0318 10:27:48.883737 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8gm_2011b02b-e4a7-43ac-af50-d30a48d38b1b/reloader/0.log" Mar 18 10:27:48.905112 master-0 kubenswrapper[30420]: I0318 10:27:48.905063 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8gm_2011b02b-e4a7-43ac-af50-d30a48d38b1b/frr-metrics/0.log" Mar 18 10:27:48.920945 master-0 kubenswrapper[30420]: I0318 10:27:48.920890 30420 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-jr8gm_2011b02b-e4a7-43ac-af50-d30a48d38b1b/kube-rbac-proxy/0.log" Mar 18 10:27:48.947702 master-0 kubenswrapper[30420]: I0318 10:27:48.947655 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-64gfl_f9b32010-f07f-4e5a-a7a6-10e14dd65d91/nmstate-console-plugin/0.log" Mar 18 10:27:48.960413 master-0 kubenswrapper[30420]: I0318 10:27:48.948249 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8gm_2011b02b-e4a7-43ac-af50-d30a48d38b1b/kube-rbac-proxy-frr/0.log" Mar 18 10:27:48.961125 master-0 kubenswrapper[30420]: I0318 10:27:48.960936 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8gm_2011b02b-e4a7-43ac-af50-d30a48d38b1b/cp-frr-files/0.log" Mar 18 10:27:48.964385 master-0 kubenswrapper[30420]: I0318 10:27:48.964327 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8gm_2011b02b-e4a7-43ac-af50-d30a48d38b1b/cp-reloader/0.log" Mar 18 10:27:48.985451 master-0 kubenswrapper[30420]: I0318 10:27:48.985403 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-clbx9_49f160ff-7093-4d65-99b5-51ea63e10306/nmstate-handler/0.log" Mar 18 10:27:49.001175 master-0 kubenswrapper[30420]: I0318 10:27:49.001137 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8gm_2011b02b-e4a7-43ac-af50-d30a48d38b1b/cp-metrics/0.log" Mar 18 10:27:49.013854 master-0 kubenswrapper[30420]: I0318 10:27:49.011906 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-v4pqg_feb29aaa-6472-498d-9362-9f56312e248a/nmstate-metrics/0.log" Mar 18 10:27:49.021361 master-0 kubenswrapper[30420]: I0318 10:27:49.021318 30420 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-nqcwb_0223b511-6041-4268-9c8a-079924b86793/frr-k8s-webhook-server/0.log" Mar 18 10:27:49.029315 master-0 kubenswrapper[30420]: I0318 10:27:49.028889 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-v4pqg_feb29aaa-6472-498d-9362-9f56312e248a/kube-rbac-proxy/0.log" Mar 18 10:27:49.056573 master-0 kubenswrapper[30420]: I0318 10:27:49.056526 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-564bb7959-qgbm2_85edf76a-b718-42ae-b899-54a0f53cf836/manager/0.log" Mar 18 10:27:49.069127 master-0 kubenswrapper[30420]: I0318 10:27:49.068665 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7bff698c48-vrvtb_7972836b-9e15-4fdd-8408-e1ca80deaeef/webhook-server/0.log" Mar 18 10:27:49.085843 master-0 kubenswrapper[30420]: I0318 10:27:49.083627 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-7z65r_9ac1f807-09b8-4fd1-be56-682238c80007/nmstate-operator/0.log" Mar 18 10:27:49.112400 master-0 kubenswrapper[30420]: I0318 10:27:49.112367 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-bfbvm_8d3ab44d-452a-4080-b985-0e24d2d5bf5d/nmstate-webhook/0.log" Mar 18 10:27:49.187954 master-0 kubenswrapper[30420]: I0318 10:27:49.187271 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-nv5ns_9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3/speaker/0.log" Mar 18 10:27:49.198764 master-0 kubenswrapper[30420]: I0318 10:27:49.198669 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-nv5ns_9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3/kube-rbac-proxy/0.log" Mar 18 10:27:50.038147 master-0 kubenswrapper[30420]: I0318 10:27:50.038090 30420 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-must-gather-mp5s6/must-gather-bknkh" event={"ID":"4500427c-b161-4f81-a0ca-9bd2005b33d3","Type":"ContainerStarted","Data":"b7eb268986354a8bd7c0623aa1a3d2dc48fdd30ada86c70b0ccbba0f9271aa5a"} Mar 18 10:27:50.038147 master-0 kubenswrapper[30420]: I0318 10:27:50.038151 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mp5s6/must-gather-bknkh" event={"ID":"4500427c-b161-4f81-a0ca-9bd2005b33d3","Type":"ContainerStarted","Data":"32b47d27e6b6f56727c152395016abc0aa9e3a69b137441386087bf15ac2b2f5"} Mar 18 10:27:50.424590 master-0 kubenswrapper[30420]: I0318 10:27:50.424490 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcdctl/0.log" Mar 18 10:27:50.608842 master-0 kubenswrapper[30420]: I0318 10:27:50.608768 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd/0.log" Mar 18 10:27:50.622843 master-0 kubenswrapper[30420]: I0318 10:27:50.622619 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-metrics/0.log" Mar 18 10:27:50.638777 master-0 kubenswrapper[30420]: I0318 10:27:50.638532 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-readyz/0.log" Mar 18 10:27:50.649663 master-0 kubenswrapper[30420]: I0318 10:27:50.649618 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-rev/0.log" Mar 18 10:27:50.667298 master-0 kubenswrapper[30420]: I0318 10:27:50.667241 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/setup/0.log" Mar 18 10:27:50.680129 master-0 kubenswrapper[30420]: I0318 10:27:50.680030 30420 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-ensure-env-vars/0.log" Mar 18 10:27:50.691537 master-0 kubenswrapper[30420]: I0318 10:27:50.691490 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-resources-copy/0.log" Mar 18 10:27:50.746904 master-0 kubenswrapper[30420]: I0318 10:27:50.746502 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_be8bd84c-8035-4bec-a725-b0ae89382c0f/installer/0.log" Mar 18 10:27:50.807848 master-0 kubenswrapper[30420]: I0318 10:27:50.807189 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_87a8662e-66f1-4aee-9344-564bb4ac4f9a/installer/0.log" Mar 18 10:27:51.736098 master-0 kubenswrapper[30420]: I0318 10:27:51.736055 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/assisted-installer_assisted-installer-controller-ttq68_2cda3479-c3ed-4d79-bbd3-888e64b328f7/assisted-installer-controller/0.log" Mar 18 10:27:51.790991 master-0 kubenswrapper[30420]: I0318 10:27:51.790939 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-754d5d5989-q9cdp_dba62b67-572b-4250-a7de-1a092edd4c68/oauth-openshift/0.log" Mar 18 10:27:52.756390 master-0 kubenswrapper[30420]: I0318 10:27:52.756340 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-4q9tr_f076eaf0-b041-4db0-ba06-3d85e23bb654/authentication-operator/2.log" Mar 18 10:27:52.778870 master-0 kubenswrapper[30420]: I0318 10:27:52.778652 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-4q9tr_f076eaf0-b041-4db0-ba06-3d85e23bb654/authentication-operator/3.log" Mar 18 10:27:53.522384 master-0 kubenswrapper[30420]: I0318 10:27:53.522330 30420 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-7dcf5569b5-82tbk_43d54514-989c-4c82-93f9-153b44eacdd1/router/4.log" Mar 18 10:27:53.524558 master-0 kubenswrapper[30420]: I0318 10:27:53.524528 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-7dcf5569b5-82tbk_43d54514-989c-4c82-93f9-153b44eacdd1/router/3.log" Mar 18 10:27:54.236298 master-0 kubenswrapper[30420]: I0318 10:27:54.236229 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-6d58f9cc86-7vcln_8b906fc0-f2bf-4586-97e6-921bbd467b65/oauth-apiserver/0.log" Mar 18 10:27:54.247120 master-0 kubenswrapper[30420]: I0318 10:27:54.247079 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-6d58f9cc86-7vcln_8b906fc0-f2bf-4586-97e6-921bbd467b65/fix-audit-permissions/0.log" Mar 18 10:27:54.772100 master-0 kubenswrapper[30420]: I0318 10:27:54.772056 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-mw9tt_9f5c64aa-676e-4e48-b714-02f6edb1d361/kube-rbac-proxy/0.log" Mar 18 10:27:54.795113 master-0 kubenswrapper[30420]: I0318 10:27:54.795071 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-mw9tt_9f5c64aa-676e-4e48-b714-02f6edb1d361/cluster-autoscaler-operator/0.log" Mar 18 10:27:54.799980 master-0 kubenswrapper[30420]: I0318 10:27:54.799943 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-mw9tt_9f5c64aa-676e-4e48-b714-02f6edb1d361/cluster-autoscaler-operator/1.log" Mar 18 10:27:54.819414 master-0 kubenswrapper[30420]: I0318 10:27:54.819358 30420 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-lnq7l_1084562a-20a0-432d-b739-90bc0a4daff2/cluster-baremetal-operator/2.log" Mar 18 10:27:54.820058 master-0 kubenswrapper[30420]: I0318 10:27:54.820028 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-lnq7l_1084562a-20a0-432d-b739-90bc0a4daff2/cluster-baremetal-operator/3.log" Mar 18 10:27:54.831630 master-0 kubenswrapper[30420]: I0318 10:27:54.831594 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-lnq7l_1084562a-20a0-432d-b739-90bc0a4daff2/baremetal-kube-rbac-proxy/0.log" Mar 18 10:27:54.847405 master-0 kubenswrapper[30420]: I0318 10:27:54.847360 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-zcm5j_f88c2a18-11f5-45ef-aff1-3c5976716d85/control-plane-machine-set-operator/1.log" Mar 18 10:27:54.848560 master-0 kubenswrapper[30420]: I0318 10:27:54.848540 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-zcm5j_f88c2a18-11f5-45ef-aff1-3c5976716d85/control-plane-machine-set-operator/0.log" Mar 18 10:27:54.866181 master-0 kubenswrapper[30420]: I0318 10:27:54.866124 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-xnvn9_29fbc78b-1887-40d4-8165-f0f7cc40b583/kube-rbac-proxy/0.log" Mar 18 10:27:54.917267 master-0 kubenswrapper[30420]: I0318 10:27:54.917221 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-xnvn9_29fbc78b-1887-40d4-8165-f0f7cc40b583/machine-api-operator/0.log" Mar 18 10:27:54.918551 master-0 kubenswrapper[30420]: I0318 10:27:54.918534 30420 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-xnvn9_29fbc78b-1887-40d4-8165-f0f7cc40b583/machine-api-operator/1.log" Mar 18 10:27:55.025107 master-0 kubenswrapper[30420]: I0318 10:27:55.024982 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-mp5s6/must-gather-bknkh" podStartSLOduration=7.226320694 podStartE2EDuration="18.024966152s" podCreationTimestamp="2026-03-18 10:27:37 +0000 UTC" firstStartedPulling="2026-03-18 10:27:38.114426215 +0000 UTC m=+1022.167172154" lastFinishedPulling="2026-03-18 10:27:48.913071683 +0000 UTC m=+1032.965817612" observedRunningTime="2026-03-18 10:27:50.10715326 +0000 UTC m=+1034.159899209" watchObservedRunningTime="2026-03-18 10:27:55.024966152 +0000 UTC m=+1039.077712081" Mar 18 10:27:55.030251 master-0 kubenswrapper[30420]: I0318 10:27:55.030209 30420 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mp5s6/perf-node-gather-daemonset-cxtwm"] Mar 18 10:27:55.031529 master-0 kubenswrapper[30420]: I0318 10:27:55.031508 30420 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mp5s6/perf-node-gather-daemonset-cxtwm" Mar 18 10:27:55.053454 master-0 kubenswrapper[30420]: I0318 10:27:55.053393 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mp5s6/perf-node-gather-daemonset-cxtwm"] Mar 18 10:27:55.125047 master-0 kubenswrapper[30420]: I0318 10:27:55.124926 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/71c6a9f8-56cf-4786-809a-c5fbbfe30deb-sys\") pod \"perf-node-gather-daemonset-cxtwm\" (UID: \"71c6a9f8-56cf-4786-809a-c5fbbfe30deb\") " pod="openshift-must-gather-mp5s6/perf-node-gather-daemonset-cxtwm" Mar 18 10:27:55.125047 master-0 kubenswrapper[30420]: I0318 10:27:55.124986 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/71c6a9f8-56cf-4786-809a-c5fbbfe30deb-proc\") pod \"perf-node-gather-daemonset-cxtwm\" (UID: \"71c6a9f8-56cf-4786-809a-c5fbbfe30deb\") " pod="openshift-must-gather-mp5s6/perf-node-gather-daemonset-cxtwm" Mar 18 10:27:55.125047 master-0 kubenswrapper[30420]: I0318 10:27:55.125022 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mldnj\" (UniqueName: \"kubernetes.io/projected/71c6a9f8-56cf-4786-809a-c5fbbfe30deb-kube-api-access-mldnj\") pod \"perf-node-gather-daemonset-cxtwm\" (UID: \"71c6a9f8-56cf-4786-809a-c5fbbfe30deb\") " pod="openshift-must-gather-mp5s6/perf-node-gather-daemonset-cxtwm" Mar 18 10:27:55.125047 master-0 kubenswrapper[30420]: I0318 10:27:55.125056 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/71c6a9f8-56cf-4786-809a-c5fbbfe30deb-podres\") pod \"perf-node-gather-daemonset-cxtwm\" (UID: \"71c6a9f8-56cf-4786-809a-c5fbbfe30deb\") " 
pod="openshift-must-gather-mp5s6/perf-node-gather-daemonset-cxtwm" Mar 18 10:27:55.125365 master-0 kubenswrapper[30420]: I0318 10:27:55.125088 30420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71c6a9f8-56cf-4786-809a-c5fbbfe30deb-lib-modules\") pod \"perf-node-gather-daemonset-cxtwm\" (UID: \"71c6a9f8-56cf-4786-809a-c5fbbfe30deb\") " pod="openshift-must-gather-mp5s6/perf-node-gather-daemonset-cxtwm" Mar 18 10:27:55.226655 master-0 kubenswrapper[30420]: I0318 10:27:55.226599 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/71c6a9f8-56cf-4786-809a-c5fbbfe30deb-sys\") pod \"perf-node-gather-daemonset-cxtwm\" (UID: \"71c6a9f8-56cf-4786-809a-c5fbbfe30deb\") " pod="openshift-must-gather-mp5s6/perf-node-gather-daemonset-cxtwm" Mar 18 10:27:55.226929 master-0 kubenswrapper[30420]: I0318 10:27:55.226673 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/71c6a9f8-56cf-4786-809a-c5fbbfe30deb-proc\") pod \"perf-node-gather-daemonset-cxtwm\" (UID: \"71c6a9f8-56cf-4786-809a-c5fbbfe30deb\") " pod="openshift-must-gather-mp5s6/perf-node-gather-daemonset-cxtwm" Mar 18 10:27:55.226990 master-0 kubenswrapper[30420]: I0318 10:27:55.226933 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/71c6a9f8-56cf-4786-809a-c5fbbfe30deb-proc\") pod \"perf-node-gather-daemonset-cxtwm\" (UID: \"71c6a9f8-56cf-4786-809a-c5fbbfe30deb\") " pod="openshift-must-gather-mp5s6/perf-node-gather-daemonset-cxtwm" Mar 18 10:27:55.227040 master-0 kubenswrapper[30420]: I0318 10:27:55.226985 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mldnj\" (UniqueName: 
\"kubernetes.io/projected/71c6a9f8-56cf-4786-809a-c5fbbfe30deb-kube-api-access-mldnj\") pod \"perf-node-gather-daemonset-cxtwm\" (UID: \"71c6a9f8-56cf-4786-809a-c5fbbfe30deb\") " pod="openshift-must-gather-mp5s6/perf-node-gather-daemonset-cxtwm" Mar 18 10:27:55.227203 master-0 kubenswrapper[30420]: I0318 10:27:55.227137 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/71c6a9f8-56cf-4786-809a-c5fbbfe30deb-podres\") pod \"perf-node-gather-daemonset-cxtwm\" (UID: \"71c6a9f8-56cf-4786-809a-c5fbbfe30deb\") " pod="openshift-must-gather-mp5s6/perf-node-gather-daemonset-cxtwm" Mar 18 10:27:55.227297 master-0 kubenswrapper[30420]: I0318 10:27:55.227002 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/71c6a9f8-56cf-4786-809a-c5fbbfe30deb-sys\") pod \"perf-node-gather-daemonset-cxtwm\" (UID: \"71c6a9f8-56cf-4786-809a-c5fbbfe30deb\") " pod="openshift-must-gather-mp5s6/perf-node-gather-daemonset-cxtwm" Mar 18 10:27:55.227297 master-0 kubenswrapper[30420]: I0318 10:27:55.227244 30420 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71c6a9f8-56cf-4786-809a-c5fbbfe30deb-lib-modules\") pod \"perf-node-gather-daemonset-cxtwm\" (UID: \"71c6a9f8-56cf-4786-809a-c5fbbfe30deb\") " pod="openshift-must-gather-mp5s6/perf-node-gather-daemonset-cxtwm" Mar 18 10:27:55.227396 master-0 kubenswrapper[30420]: I0318 10:27:55.227309 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71c6a9f8-56cf-4786-809a-c5fbbfe30deb-lib-modules\") pod \"perf-node-gather-daemonset-cxtwm\" (UID: \"71c6a9f8-56cf-4786-809a-c5fbbfe30deb\") " pod="openshift-must-gather-mp5s6/perf-node-gather-daemonset-cxtwm" Mar 18 10:27:55.227396 master-0 kubenswrapper[30420]: I0318 10:27:55.227377 30420 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/71c6a9f8-56cf-4786-809a-c5fbbfe30deb-podres\") pod \"perf-node-gather-daemonset-cxtwm\" (UID: \"71c6a9f8-56cf-4786-809a-c5fbbfe30deb\") " pod="openshift-must-gather-mp5s6/perf-node-gather-daemonset-cxtwm" Mar 18 10:27:55.246056 master-0 kubenswrapper[30420]: I0318 10:27:55.245998 30420 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mldnj\" (UniqueName: \"kubernetes.io/projected/71c6a9f8-56cf-4786-809a-c5fbbfe30deb-kube-api-access-mldnj\") pod \"perf-node-gather-daemonset-cxtwm\" (UID: \"71c6a9f8-56cf-4786-809a-c5fbbfe30deb\") " pod="openshift-must-gather-mp5s6/perf-node-gather-daemonset-cxtwm" Mar 18 10:27:55.346977 master-0 kubenswrapper[30420]: I0318 10:27:55.346912 30420 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mp5s6/perf-node-gather-daemonset-cxtwm" Mar 18 10:27:55.857643 master-0 kubenswrapper[30420]: I0318 10:27:55.856201 30420 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mp5s6/perf-node-gather-daemonset-cxtwm"] Mar 18 10:27:56.016609 master-0 kubenswrapper[30420]: I0318 10:27:56.016555 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-w62wg_df659e73-45f8-4601-a966-c2de80fd6ba2/controller/0.log" Mar 18 10:27:56.027936 master-0 kubenswrapper[30420]: I0318 10:27:56.027890 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-w62wg_df659e73-45f8-4601-a966-c2de80fd6ba2/kube-rbac-proxy/0.log" Mar 18 10:27:56.048619 master-0 kubenswrapper[30420]: I0318 10:27:56.048559 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8gm_2011b02b-e4a7-43ac-af50-d30a48d38b1b/controller/0.log" Mar 18 10:27:56.083421 master-0 kubenswrapper[30420]: I0318 10:27:56.083361 30420 log.go:25] "Finished parsing 
log file" path="/var/log/pods/metallb-system_frr-k8s-jr8gm_2011b02b-e4a7-43ac-af50-d30a48d38b1b/frr/0.log" Mar 18 10:27:56.092403 master-0 kubenswrapper[30420]: I0318 10:27:56.092345 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8gm_2011b02b-e4a7-43ac-af50-d30a48d38b1b/reloader/0.log" Mar 18 10:27:56.102632 master-0 kubenswrapper[30420]: I0318 10:27:56.102579 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-rpxn4_8641c1d1-dd79-4f1f-9343-52d1ee6faf9f/cluster-cloud-controller-manager/0.log" Mar 18 10:27:56.105649 master-0 kubenswrapper[30420]: I0318 10:27:56.105617 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-rpxn4_8641c1d1-dd79-4f1f-9343-52d1ee6faf9f/cluster-cloud-controller-manager/1.log" Mar 18 10:27:56.106748 master-0 kubenswrapper[30420]: I0318 10:27:56.106721 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8gm_2011b02b-e4a7-43ac-af50-d30a48d38b1b/frr-metrics/0.log" Mar 18 10:27:56.108100 master-0 kubenswrapper[30420]: I0318 10:27:56.107995 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mp5s6/perf-node-gather-daemonset-cxtwm" event={"ID":"71c6a9f8-56cf-4786-809a-c5fbbfe30deb","Type":"ContainerStarted","Data":"253121aef0234828e2f14ab47131407cf20f0ff2470185efa1fe0747ddd13ac7"} Mar 18 10:27:56.113435 master-0 kubenswrapper[30420]: I0318 10:27:56.113389 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8gm_2011b02b-e4a7-43ac-af50-d30a48d38b1b/kube-rbac-proxy/0.log" Mar 18 10:27:56.116755 master-0 kubenswrapper[30420]: I0318 10:27:56.116710 30420 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-rpxn4_8641c1d1-dd79-4f1f-9343-52d1ee6faf9f/config-sync-controllers/0.log" Mar 18 10:27:56.117294 master-0 kubenswrapper[30420]: I0318 10:27:56.117263 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-rpxn4_8641c1d1-dd79-4f1f-9343-52d1ee6faf9f/config-sync-controllers/1.log" Mar 18 10:27:56.119895 master-0 kubenswrapper[30420]: I0318 10:27:56.119691 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8gm_2011b02b-e4a7-43ac-af50-d30a48d38b1b/kube-rbac-proxy-frr/0.log" Mar 18 10:27:56.126607 master-0 kubenswrapper[30420]: I0318 10:27:56.126561 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8gm_2011b02b-e4a7-43ac-af50-d30a48d38b1b/cp-frr-files/0.log" Mar 18 10:27:56.132075 master-0 kubenswrapper[30420]: I0318 10:27:56.132021 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-rpxn4_8641c1d1-dd79-4f1f-9343-52d1ee6faf9f/kube-rbac-proxy/0.log" Mar 18 10:27:56.132758 master-0 kubenswrapper[30420]: I0318 10:27:56.132717 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8gm_2011b02b-e4a7-43ac-af50-d30a48d38b1b/cp-reloader/0.log" Mar 18 10:27:56.144863 master-0 kubenswrapper[30420]: I0318 10:27:56.144801 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8gm_2011b02b-e4a7-43ac-af50-d30a48d38b1b/cp-metrics/0.log" Mar 18 10:27:56.158792 master-0 kubenswrapper[30420]: I0318 10:27:56.158744 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-nqcwb_0223b511-6041-4268-9c8a-079924b86793/frr-k8s-webhook-server/0.log" Mar 18 
10:27:56.195616 master-0 kubenswrapper[30420]: I0318 10:27:56.195573 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-564bb7959-qgbm2_85edf76a-b718-42ae-b899-54a0f53cf836/manager/0.log" Mar 18 10:27:56.301912 master-0 kubenswrapper[30420]: I0318 10:27:56.301841 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7bff698c48-vrvtb_7972836b-9e15-4fdd-8408-e1ca80deaeef/webhook-server/0.log" Mar 18 10:27:56.513060 master-0 kubenswrapper[30420]: I0318 10:27:56.510150 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-nv5ns_9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3/speaker/0.log" Mar 18 10:27:56.516216 master-0 kubenswrapper[30420]: I0318 10:27:56.516179 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-nv5ns_9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3/kube-rbac-proxy/0.log" Mar 18 10:27:57.119050 master-0 kubenswrapper[30420]: I0318 10:27:57.118996 30420 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mp5s6/perf-node-gather-daemonset-cxtwm" event={"ID":"71c6a9f8-56cf-4786-809a-c5fbbfe30deb","Type":"ContainerStarted","Data":"ed38238db046e3e520b75bece753e1c26eaaa67fce2e89409c8dc38d9448ef63"} Mar 18 10:27:57.119400 master-0 kubenswrapper[30420]: I0318 10:27:57.119186 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-must-gather-mp5s6/perf-node-gather-daemonset-cxtwm" Mar 18 10:27:58.197027 master-0 kubenswrapper[30420]: I0318 10:27:58.196987 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-744f9dbf77-rtnkl_caec44dc-aab7-4407-b34a-52bbe4b4f635/kube-rbac-proxy/0.log" Mar 18 10:27:58.225100 master-0 kubenswrapper[30420]: I0318 10:27:58.225029 30420 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-744f9dbf77-rtnkl_caec44dc-aab7-4407-b34a-52bbe4b4f635/cloud-credential-operator/0.log" Mar 18 10:27:59.301979 master-0 kubenswrapper[30420]: I0318 10:27:59.301921 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-mpdps_d1523b24-886d-45df-8a0c-bea036940823/registry-server/0.log" Mar 18 10:27:59.369250 master-0 kubenswrapper[30420]: I0318 10:27:59.369207 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-495pg_0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480/openshift-config-operator/3.log" Mar 18 10:27:59.370418 master-0 kubenswrapper[30420]: I0318 10:27:59.370391 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-495pg_0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480/openshift-config-operator/4.log" Mar 18 10:27:59.380391 master-0 kubenswrapper[30420]: I0318 10:27:59.380349 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-495pg_0ed4e3d9-be0c-435a-82d7-5d2fa7b6d480/openshift-api/0.log" Mar 18 10:28:00.109186 master-0 kubenswrapper[30420]: I0318 10:28:00.109148 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-76b6568d85-cmj6p_25a8ccb6-ea69-45bf-b460-1b887c5b3f22/console-operator/2.log" Mar 18 10:28:00.121307 master-0 kubenswrapper[30420]: I0318 10:28:00.121248 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-76b6568d85-cmj6p_25a8ccb6-ea69-45bf-b460-1b887c5b3f22/console-operator/3.log" Mar 18 10:28:00.614889 master-0 kubenswrapper[30420]: I0318 10:28:00.614804 30420 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-79d8975cbd-5smbb_a42b137a-07fc-4146-8b2e-086c398dccef/console/0.log" Mar 18 10:28:00.638098 master-0 kubenswrapper[30420]: I0318 10:28:00.638059 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-66b8ffb895-wg4k5_5c77e26d-a46a-4552-88b8-2c8e3473437e/download-server/0.log" Mar 18 10:28:01.303176 master-0 kubenswrapper[30420]: I0318 10:28:01.303094 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-7d87854d6-4kr54_29490aed-9c97-42d1-94c8-44d1de13b70c/cluster-storage-operator/0.log" Mar 18 10:28:01.303538 master-0 kubenswrapper[30420]: I0318 10:28:01.303516 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-7d87854d6-4kr54_29490aed-9c97-42d1-94c8-44d1de13b70c/cluster-storage-operator/1.log" Mar 18 10:28:01.320638 master-0 kubenswrapper[30420]: I0318 10:28:01.320584 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-2l6cq_932a70df-3afe-4873-9449-ab6e061d3fe3/snapshot-controller/3.log" Mar 18 10:28:01.321286 master-0 kubenswrapper[30420]: I0318 10:28:01.321251 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-2l6cq_932a70df-3afe-4873-9449-ab6e061d3fe3/snapshot-controller/4.log" Mar 18 10:28:01.344173 master-0 kubenswrapper[30420]: I0318 10:28:01.344115 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-5f5d689c6b-mqbmq_8e812dd9-cd05-4e9e-8710-d0920181ece2/csi-snapshot-controller-operator/1.log" Mar 18 10:28:01.348132 master-0 kubenswrapper[30420]: I0318 10:28:01.348087 30420 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-5f5d689c6b-mqbmq_8e812dd9-cd05-4e9e-8710-d0920181ece2/csi-snapshot-controller-operator/0.log" Mar 18 10:28:01.851123 master-0 kubenswrapper[30420]: I0318 10:28:01.851064 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-9c5679d8f-jrmkr_8cb5158f-2199-42c0-995a-8490c9ec8a95/dns-operator/0.log" Mar 18 10:28:01.863299 master-0 kubenswrapper[30420]: I0318 10:28:01.863247 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-9c5679d8f-jrmkr_8cb5158f-2199-42c0-995a-8490c9ec8a95/kube-rbac-proxy/0.log" Mar 18 10:28:02.364341 master-0 kubenswrapper[30420]: I0318 10:28:02.364298 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-z9sf5_da04c6fa-4916-4bed-a6b2-cc92bf2ee379/dns/0.log" Mar 18 10:28:02.376752 master-0 kubenswrapper[30420]: I0318 10:28:02.376714 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-z9sf5_da04c6fa-4916-4bed-a6b2-cc92bf2ee379/kube-rbac-proxy/0.log" Mar 18 10:28:02.389727 master-0 kubenswrapper[30420]: I0318 10:28:02.389688 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-hjpz8_e8d3cf68-ed97-45b9-8c83-b42bb1f789fc/dns-node-resolver/0.log" Mar 18 10:28:02.835503 master-0 kubenswrapper[30420]: I0318 10:28:02.835458 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-4tlnm_a078565a-6970-4f42-84f4-938f1d637245/etcd-operator/3.log" Mar 18 10:28:02.841399 master-0 kubenswrapper[30420]: I0318 10:28:02.841350 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-4tlnm_a078565a-6970-4f42-84f4-938f1d637245/etcd-operator/2.log" Mar 18 10:28:03.390601 master-0 kubenswrapper[30420]: I0318 10:28:03.390554 30420 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcdctl/0.log" Mar 18 10:28:03.596340 master-0 kubenswrapper[30420]: I0318 10:28:03.596296 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd/0.log" Mar 18 10:28:03.609566 master-0 kubenswrapper[30420]: I0318 10:28:03.609528 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-metrics/0.log" Mar 18 10:28:03.618181 master-0 kubenswrapper[30420]: I0318 10:28:03.618137 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-readyz/0.log" Mar 18 10:28:03.629348 master-0 kubenswrapper[30420]: I0318 10:28:03.629296 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-rev/0.log" Mar 18 10:28:03.642884 master-0 kubenswrapper[30420]: I0318 10:28:03.642764 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/setup/0.log" Mar 18 10:28:03.653605 master-0 kubenswrapper[30420]: I0318 10:28:03.653567 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-ensure-env-vars/0.log" Mar 18 10:28:03.663114 master-0 kubenswrapper[30420]: I0318 10:28:03.663083 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-resources-copy/0.log" Mar 18 10:28:03.719049 master-0 kubenswrapper[30420]: I0318 10:28:03.719001 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_be8bd84c-8035-4bec-a725-b0ae89382c0f/installer/0.log" Mar 18 10:28:03.779792 master-0 kubenswrapper[30420]: I0318 10:28:03.779719 30420 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_87a8662e-66f1-4aee-9344-564bb4ac4f9a/installer/0.log" Mar 18 10:28:04.405057 master-0 kubenswrapper[30420]: I0318 10:28:04.405014 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_cluster-image-registry-operator-5549dc66cb-f4f7m_8ee99294-4785-49d0-b493-0d734cf09396/cluster-image-registry-operator/1.log" Mar 18 10:28:04.407029 master-0 kubenswrapper[30420]: I0318 10:28:04.406991 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_cluster-image-registry-operator-5549dc66cb-f4f7m_8ee99294-4785-49d0-b493-0d734cf09396/cluster-image-registry-operator/0.log" Mar 18 10:28:04.417438 master-0 kubenswrapper[30420]: I0318 10:28:04.417388 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-mh2fn_9fba458a-8c86-4d0a-8efb-266a84f62a9a/node-ca/0.log" Mar 18 10:28:04.915841 master-0 kubenswrapper[30420]: I0318 10:28:04.915776 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-kr5kz_accc57fb-75f5-4f89-9804-6ede7f77e27c/ingress-operator/4.log" Mar 18 10:28:04.918153 master-0 kubenswrapper[30420]: I0318 10:28:04.917971 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-kr5kz_accc57fb-75f5-4f89-9804-6ede7f77e27c/ingress-operator/5.log" Mar 18 10:28:04.927478 master-0 kubenswrapper[30420]: I0318 10:28:04.927424 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-mw9tt_9f5c64aa-676e-4e48-b714-02f6edb1d361/kube-rbac-proxy/0.log" Mar 18 10:28:04.930304 master-0 kubenswrapper[30420]: I0318 10:28:04.930264 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-kr5kz_accc57fb-75f5-4f89-9804-6ede7f77e27c/kube-rbac-proxy/0.log" Mar 
18 10:28:04.956202 master-0 kubenswrapper[30420]: I0318 10:28:04.956142 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-mw9tt_9f5c64aa-676e-4e48-b714-02f6edb1d361/cluster-autoscaler-operator/0.log" Mar 18 10:28:04.959339 master-0 kubenswrapper[30420]: I0318 10:28:04.959288 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-mw9tt_9f5c64aa-676e-4e48-b714-02f6edb1d361/cluster-autoscaler-operator/1.log" Mar 18 10:28:04.970079 master-0 kubenswrapper[30420]: I0318 10:28:04.970008 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-lnq7l_1084562a-20a0-432d-b739-90bc0a4daff2/cluster-baremetal-operator/2.log" Mar 18 10:28:04.971711 master-0 kubenswrapper[30420]: I0318 10:28:04.971681 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-lnq7l_1084562a-20a0-432d-b739-90bc0a4daff2/cluster-baremetal-operator/3.log" Mar 18 10:28:04.979746 master-0 kubenswrapper[30420]: I0318 10:28:04.979653 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-lnq7l_1084562a-20a0-432d-b739-90bc0a4daff2/baremetal-kube-rbac-proxy/0.log" Mar 18 10:28:04.991583 master-0 kubenswrapper[30420]: I0318 10:28:04.991494 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-zcm5j_f88c2a18-11f5-45ef-aff1-3c5976716d85/control-plane-machine-set-operator/0.log" Mar 18 10:28:04.992100 master-0 kubenswrapper[30420]: I0318 10:28:04.992067 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-zcm5j_f88c2a18-11f5-45ef-aff1-3c5976716d85/control-plane-machine-set-operator/1.log" Mar 18 10:28:05.004216 
master-0 kubenswrapper[30420]: I0318 10:28:05.004128 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-xnvn9_29fbc78b-1887-40d4-8165-f0f7cc40b583/kube-rbac-proxy/0.log" Mar 18 10:28:05.012986 master-0 kubenswrapper[30420]: I0318 10:28:05.012935 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-xnvn9_29fbc78b-1887-40d4-8165-f0f7cc40b583/machine-api-operator/0.log" Mar 18 10:28:05.014987 master-0 kubenswrapper[30420]: I0318 10:28:05.014931 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-xnvn9_29fbc78b-1887-40d4-8165-f0f7cc40b583/machine-api-operator/1.log" Mar 18 10:28:05.376499 master-0 kubenswrapper[30420]: I0318 10:28:05.376451 30420 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-mp5s6/perf-node-gather-daemonset-cxtwm" Mar 18 10:28:05.407317 master-0 kubenswrapper[30420]: I0318 10:28:05.407185 30420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-mp5s6/perf-node-gather-daemonset-cxtwm" podStartSLOduration=11.407153853 podStartE2EDuration="11.407153853s" podCreationTimestamp="2026-03-18 10:27:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 10:27:57.365065689 +0000 UTC m=+1041.417811648" watchObservedRunningTime="2026-03-18 10:28:05.407153853 +0000 UTC m=+1049.459899782" Mar 18 10:28:05.603969 master-0 kubenswrapper[30420]: I0318 10:28:05.603916 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-rzksb_74476be5-669a-4737-b93b-c4870423a4da/serve-healthcheck-canary/0.log" Mar 18 10:28:06.104430 master-0 kubenswrapper[30420]: I0318 10:28:06.104231 30420 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-insights_insights-operator-68bf6ff9d6-bdcw7_71755097-7543-48f8-8925-0e21650bf8f6/insights-operator/0.log" Mar 18 10:28:07.507697 master-0 kubenswrapper[30420]: I0318 10:28:07.507559 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_c4abc917-fc2d-4957-9270-86bb310ecf75/alertmanager/0.log" Mar 18 10:28:07.545244 master-0 kubenswrapper[30420]: I0318 10:28:07.545179 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_c4abc917-fc2d-4957-9270-86bb310ecf75/config-reloader/0.log" Mar 18 10:28:07.678390 master-0 kubenswrapper[30420]: I0318 10:28:07.678351 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_c4abc917-fc2d-4957-9270-86bb310ecf75/kube-rbac-proxy-web/0.log" Mar 18 10:28:07.695902 master-0 kubenswrapper[30420]: I0318 10:28:07.695867 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_c4abc917-fc2d-4957-9270-86bb310ecf75/kube-rbac-proxy/0.log" Mar 18 10:28:07.720346 master-0 kubenswrapper[30420]: I0318 10:28:07.720102 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_c4abc917-fc2d-4957-9270-86bb310ecf75/kube-rbac-proxy-metric/0.log" Mar 18 10:28:07.734034 master-0 kubenswrapper[30420]: I0318 10:28:07.733997 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_c4abc917-fc2d-4957-9270-86bb310ecf75/prom-label-proxy/0.log" Mar 18 10:28:07.745095 master-0 kubenswrapper[30420]: I0318 10:28:07.745034 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_c4abc917-fc2d-4957-9270-86bb310ecf75/init-config-reloader/0.log" Mar 18 10:28:07.785632 master-0 kubenswrapper[30420]: I0318 10:28:07.785594 30420 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-monitoring_cluster-monitoring-operator-58845fbb57-8kx9m_f69a00b6-d908-4485-bb0d-57594fc01d24/cluster-monitoring-operator/0.log" Mar 18 10:28:07.803468 master-0 kubenswrapper[30420]: I0318 10:28:07.803393 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7bbc969446-8tbkg_5900a401-21c2-47f0-a921-47c648da558d/kube-state-metrics/0.log" Mar 18 10:28:07.818486 master-0 kubenswrapper[30420]: I0318 10:28:07.818428 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7bbc969446-8tbkg_5900a401-21c2-47f0-a921-47c648da558d/kube-rbac-proxy-main/0.log" Mar 18 10:28:07.834059 master-0 kubenswrapper[30420]: I0318 10:28:07.834018 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7bbc969446-8tbkg_5900a401-21c2-47f0-a921-47c648da558d/kube-rbac-proxy-self/0.log" Mar 18 10:28:07.849881 master-0 kubenswrapper[30420]: I0318 10:28:07.849783 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_metrics-server-7d8bb64c78-vvvft_2fb70bb5-3d3d-4abb-8f24-433e65792845/metrics-server/0.log" Mar 18 10:28:07.864606 master-0 kubenswrapper[30420]: I0318 10:28:07.864557 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_monitoring-plugin-64659f7487-wmtsx_2d4d730b-875c-4f6f-92b7-3c0e1035fdd6/monitoring-plugin/0.log" Mar 18 10:28:07.887054 master-0 kubenswrapper[30420]: I0318 10:28:07.887010 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-l9q9t_1cb8ab19-0564-4182-a7e3-0943c1480663/node-exporter/0.log" Mar 18 10:28:07.906466 master-0 kubenswrapper[30420]: I0318 10:28:07.906428 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-l9q9t_1cb8ab19-0564-4182-a7e3-0943c1480663/kube-rbac-proxy/0.log" Mar 18 10:28:07.922464 master-0 kubenswrapper[30420]: I0318 
10:28:07.922413 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-l9q9t_1cb8ab19-0564-4182-a7e3-0943c1480663/init-textfile/0.log" Mar 18 10:28:07.942386 master-0 kubenswrapper[30420]: I0318 10:28:07.942331 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-5dc6c74576-6rrn7_af1bbeee-1faf-43d1-943f-ee5319cef4e9/kube-rbac-proxy-main/0.log" Mar 18 10:28:07.957899 master-0 kubenswrapper[30420]: I0318 10:28:07.957861 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-5dc6c74576-6rrn7_af1bbeee-1faf-43d1-943f-ee5319cef4e9/kube-rbac-proxy-self/0.log" Mar 18 10:28:07.980778 master-0 kubenswrapper[30420]: I0318 10:28:07.980725 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-5dc6c74576-6rrn7_af1bbeee-1faf-43d1-943f-ee5319cef4e9/openshift-state-metrics/0.log" Mar 18 10:28:08.019155 master-0 kubenswrapper[30420]: I0318 10:28:08.019114 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_2941d21d-0c38-4037-87ed-ebd188ed5f9f/prometheus/0.log" Mar 18 10:28:08.035366 master-0 kubenswrapper[30420]: I0318 10:28:08.035322 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_2941d21d-0c38-4037-87ed-ebd188ed5f9f/config-reloader/0.log" Mar 18 10:28:08.047046 master-0 kubenswrapper[30420]: I0318 10:28:08.046950 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_2941d21d-0c38-4037-87ed-ebd188ed5f9f/thanos-sidecar/0.log" Mar 18 10:28:08.059926 master-0 kubenswrapper[30420]: I0318 10:28:08.059866 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_2941d21d-0c38-4037-87ed-ebd188ed5f9f/kube-rbac-proxy-web/0.log" Mar 18 10:28:08.071613 master-0 kubenswrapper[30420]: 
I0318 10:28:08.071568 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_2941d21d-0c38-4037-87ed-ebd188ed5f9f/kube-rbac-proxy/0.log" Mar 18 10:28:08.081851 master-0 kubenswrapper[30420]: I0318 10:28:08.081804 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_2941d21d-0c38-4037-87ed-ebd188ed5f9f/kube-rbac-proxy-thanos/0.log" Mar 18 10:28:08.097101 master-0 kubenswrapper[30420]: I0318 10:28:08.097059 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_2941d21d-0c38-4037-87ed-ebd188ed5f9f/init-config-reloader/0.log" Mar 18 10:28:08.114259 master-0 kubenswrapper[30420]: I0318 10:28:08.114208 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-6c8df6d4b-886k6_9cfd2323-c33a-4d80-9c25-710920c0e605/prometheus-operator/0.log" Mar 18 10:28:08.138937 master-0 kubenswrapper[30420]: I0318 10:28:08.138896 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-6c8df6d4b-886k6_9cfd2323-c33a-4d80-9c25-710920c0e605/kube-rbac-proxy/0.log" Mar 18 10:28:08.153352 master-0 kubenswrapper[30420]: I0318 10:28:08.153299 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-admission-webhook-69c6b55594-4wcqx_582d2ba8-1210-47d0-a530-0b20b2fdde22/prometheus-operator-admission-webhook/0.log" Mar 18 10:28:08.175871 master-0 kubenswrapper[30420]: I0318 10:28:08.175813 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-585cb8cdb6-g2jjm_aa4cba67-b5d4-46c2-8cad-1a1379f764cb/telemeter-client/0.log" Mar 18 10:28:08.184808 master-0 kubenswrapper[30420]: I0318 10:28:08.184770 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-585cb8cdb6-g2jjm_aa4cba67-b5d4-46c2-8cad-1a1379f764cb/reload/0.log" 
Mar 18 10:28:08.196524 master-0 kubenswrapper[30420]: I0318 10:28:08.196462 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-585cb8cdb6-g2jjm_aa4cba67-b5d4-46c2-8cad-1a1379f764cb/kube-rbac-proxy/0.log" Mar 18 10:28:08.215335 master-0 kubenswrapper[30420]: I0318 10:28:08.215287 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5cfdd55bb7-8m5wk_274f890d-dc38-4220-98a2-357d86249c63/thanos-query/0.log" Mar 18 10:28:08.223046 master-0 kubenswrapper[30420]: I0318 10:28:08.223003 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5cfdd55bb7-8m5wk_274f890d-dc38-4220-98a2-357d86249c63/kube-rbac-proxy-web/0.log" Mar 18 10:28:08.234333 master-0 kubenswrapper[30420]: I0318 10:28:08.234291 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5cfdd55bb7-8m5wk_274f890d-dc38-4220-98a2-357d86249c63/kube-rbac-proxy/0.log" Mar 18 10:28:08.242490 master-0 kubenswrapper[30420]: I0318 10:28:08.242453 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5cfdd55bb7-8m5wk_274f890d-dc38-4220-98a2-357d86249c63/prom-label-proxy/0.log" Mar 18 10:28:08.255134 master-0 kubenswrapper[30420]: I0318 10:28:08.255090 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5cfdd55bb7-8m5wk_274f890d-dc38-4220-98a2-357d86249c63/kube-rbac-proxy-rules/0.log" Mar 18 10:28:08.264143 master-0 kubenswrapper[30420]: I0318 10:28:08.264085 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5cfdd55bb7-8m5wk_274f890d-dc38-4220-98a2-357d86249c63/kube-rbac-proxy-metrics/0.log" Mar 18 10:28:09.675537 master-0 kubenswrapper[30420]: I0318 10:28:09.675414 30420 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-7bb4cc7c98-w62wg_df659e73-45f8-4601-a966-c2de80fd6ba2/controller/0.log"
Mar 18 10:28:09.686290 master-0 kubenswrapper[30420]: I0318 10:28:09.686251 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-w62wg_df659e73-45f8-4601-a966-c2de80fd6ba2/kube-rbac-proxy/0.log"
Mar 18 10:28:09.709135 master-0 kubenswrapper[30420]: I0318 10:28:09.709093 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8gm_2011b02b-e4a7-43ac-af50-d30a48d38b1b/controller/0.log"
Mar 18 10:28:09.755305 master-0 kubenswrapper[30420]: I0318 10:28:09.755250 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8gm_2011b02b-e4a7-43ac-af50-d30a48d38b1b/frr/0.log"
Mar 18 10:28:09.765519 master-0 kubenswrapper[30420]: I0318 10:28:09.765478 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8gm_2011b02b-e4a7-43ac-af50-d30a48d38b1b/reloader/0.log"
Mar 18 10:28:09.773522 master-0 kubenswrapper[30420]: I0318 10:28:09.773467 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8gm_2011b02b-e4a7-43ac-af50-d30a48d38b1b/frr-metrics/0.log"
Mar 18 10:28:09.788330 master-0 kubenswrapper[30420]: I0318 10:28:09.788274 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8gm_2011b02b-e4a7-43ac-af50-d30a48d38b1b/kube-rbac-proxy/0.log"
Mar 18 10:28:09.798604 master-0 kubenswrapper[30420]: I0318 10:28:09.798556 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8gm_2011b02b-e4a7-43ac-af50-d30a48d38b1b/kube-rbac-proxy-frr/0.log"
Mar 18 10:28:09.811490 master-0 kubenswrapper[30420]: I0318 10:28:09.811446 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8gm_2011b02b-e4a7-43ac-af50-d30a48d38b1b/cp-frr-files/0.log"
Mar 18 10:28:09.823887 master-0 kubenswrapper[30420]: I0318 10:28:09.823839 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8gm_2011b02b-e4a7-43ac-af50-d30a48d38b1b/cp-reloader/0.log"
Mar 18 10:28:09.835488 master-0 kubenswrapper[30420]: I0318 10:28:09.835444 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8gm_2011b02b-e4a7-43ac-af50-d30a48d38b1b/cp-metrics/0.log"
Mar 18 10:28:09.854650 master-0 kubenswrapper[30420]: I0318 10:28:09.854596 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-nqcwb_0223b511-6041-4268-9c8a-079924b86793/frr-k8s-webhook-server/0.log"
Mar 18 10:28:09.884552 master-0 kubenswrapper[30420]: I0318 10:28:09.884484 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-564bb7959-qgbm2_85edf76a-b718-42ae-b899-54a0f53cf836/manager/0.log"
Mar 18 10:28:09.900275 master-0 kubenswrapper[30420]: I0318 10:28:09.900230 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7bff698c48-vrvtb_7972836b-9e15-4fdd-8408-e1ca80deaeef/webhook-server/0.log"
Mar 18 10:28:09.981954 master-0 kubenswrapper[30420]: I0318 10:28:09.981861 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-nv5ns_9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3/speaker/0.log"
Mar 18 10:28:09.993119 master-0 kubenswrapper[30420]: I0318 10:28:09.993067 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-nv5ns_9ed87fd2-c02d-4436-bfcf-2f7dd1094bd3/kube-rbac-proxy/0.log"
Mar 18 10:28:10.587459 master-0 kubenswrapper[30420]: I0318 10:28:10.587417 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-zhsw7_7d7e7ff4-332d-434b-9c84-c14686401897/cert-manager-controller/0.log"
Mar 18 10:28:10.603663 master-0 kubenswrapper[30420]: I0318 10:28:10.603578 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-4zmkn_4bbc2122-512f-4056-8572-80126bea4f0c/cert-manager-cainjector/0.log"
Mar 18 10:28:10.614523 master-0 kubenswrapper[30420]: I0318 10:28:10.614478 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-29ng5_2792fe14-2599-454d-9b93-0587ac7086bd/cert-manager-webhook/0.log"
Mar 18 10:28:11.220985 master-0 kubenswrapper[30420]: I0318 10:28:11.220938 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-598fbc5f8f-s7rm6_c2635254-a491-42e5-b598-461c24bf77ca/cluster-node-tuning-operator/1.log"
Mar 18 10:28:11.221613 master-0 kubenswrapper[30420]: I0318 10:28:11.221557 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-598fbc5f8f-s7rm6_c2635254-a491-42e5-b598-461c24bf77ca/cluster-node-tuning-operator/0.log"
Mar 18 10:28:11.242159 master-0 kubenswrapper[30420]: I0318 10:28:11.242128 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-6rhgt_b0f77d68-f228-4f82-befb-fb2a2ce2e976/tuned/0.log"
Mar 18 10:28:11.745335 master-0 kubenswrapper[30420]: I0318 10:28:11.745289 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-8ff7d675-vh7xm_afdba306-7371-4b95-aaf9-9398417e1b12/prometheus-operator/0.log"
Mar 18 10:28:11.763433 master-0 kubenswrapper[30420]: I0318 10:28:11.763387 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-777fbd5757-czjbr_62cbc290-158f-4399-aeb5-a97661aca61d/prometheus-operator-admission-webhook/0.log"
Mar 18 10:28:11.781716 master-0 kubenswrapper[30420]: I0318 10:28:11.781683 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-777fbd5757-rsg7p_a1dd8046-1f18-404b-87df-00c917d1fdc2/prometheus-operator-admission-webhook/0.log"
Mar 18 10:28:11.804008 master-0 kubenswrapper[30420]: I0318 10:28:11.803963 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-6dd7dd855f-2csj8_32cdee06-2791-4cac-9447-26fee189be3f/operator/0.log"
Mar 18 10:28:11.827004 master-0 kubenswrapper[30420]: I0318 10:28:11.826947 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-646786b97b-ngcvz_32248d17-01fa-4580-90a9-1cff5b20cb66/perses-operator/0.log"
Mar 18 10:28:13.173534 master-0 kubenswrapper[30420]: I0318 10:28:13.173469 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-smghb_6a6a616d-012a-479e-ab3d-b21295ea1805/kube-apiserver-operator/2.log"
Mar 18 10:28:13.176196 master-0 kubenswrapper[30420]: I0318 10:28:13.176157 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-smghb_6a6a616d-012a-479e-ab3d-b21295ea1805/kube-apiserver-operator/3.log"
Mar 18 10:28:13.809102 master-0 kubenswrapper[30420]: I0318 10:28:13.809049 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_5fb70bf3-93cd-4000-be1a-8e21846d5709/installer/0.log"
Mar 18 10:28:13.832153 master-0 kubenswrapper[30420]: I0318 10:28:13.832120 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_90db95c5-2017-4b04-b11c-9844947c5be9/installer/0.log"
Mar 18 10:28:13.862953 master-0 kubenswrapper[30420]: I0318 10:28:13.862908 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-retry-1-master-0_a3657106-1eea-4031-8c92-85ba6287b425/installer/0.log"
Mar 18 10:28:13.893527 master-0 kubenswrapper[30420]: I0318 10:28:13.893450 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-6-master-0_840c140c-d526-45b2-8c25-9df4c4efd602/installer/0.log"
Mar 18 10:28:14.065673 master-0 kubenswrapper[30420]: I0318 10:28:14.065564 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_274c4bebf95a655851b2cf276fe43ef7/kube-apiserver/0.log"
Mar 18 10:28:14.089572 master-0 kubenswrapper[30420]: I0318 10:28:14.089540 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_274c4bebf95a655851b2cf276fe43ef7/kube-apiserver-cert-syncer/0.log"
Mar 18 10:28:14.108813 master-0 kubenswrapper[30420]: I0318 10:28:14.108761 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_274c4bebf95a655851b2cf276fe43ef7/kube-apiserver-cert-regeneration-controller/0.log"
Mar 18 10:28:14.117535 master-0 kubenswrapper[30420]: I0318 10:28:14.117504 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_274c4bebf95a655851b2cf276fe43ef7/kube-apiserver-insecure-readyz/0.log"
Mar 18 10:28:14.138423 master-0 kubenswrapper[30420]: I0318 10:28:14.138374 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_274c4bebf95a655851b2cf276fe43ef7/kube-apiserver-check-endpoints/0.log"
Mar 18 10:28:14.152131 master-0 kubenswrapper[30420]: I0318 10:28:14.152075 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_274c4bebf95a655851b2cf276fe43ef7/setup/0.log"
Mar 18 10:28:14.801468 master-0 kubenswrapper[30420]: I0318 10:28:14.801411 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-nq7mw_0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a/kube-rbac-proxy/0.log"
Mar 18 10:28:14.815012 master-0 kubenswrapper[30420]: I0318 10:28:14.814965 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-nq7mw_0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a/manager/1.log"
Mar 18 10:28:15.010901 master-0 kubenswrapper[30420]: I0318 10:28:15.010816 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-nq7mw_0876d14e-1fbe-4c09-b4eb-e3d2eb14ab3a/manager/0.log"
Mar 18 10:28:15.442435 master-0 kubenswrapper[30420]: I0318 10:28:15.442383 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-zhsw7_7d7e7ff4-332d-434b-9c84-c14686401897/cert-manager-controller/0.log"
Mar 18 10:28:15.457901 master-0 kubenswrapper[30420]: I0318 10:28:15.457781 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-4zmkn_4bbc2122-512f-4056-8572-80126bea4f0c/cert-manager-cainjector/0.log"
Mar 18 10:28:15.471303 master-0 kubenswrapper[30420]: I0318 10:28:15.471258 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-29ng5_2792fe14-2599-454d-9b93-0587ac7086bd/cert-manager-webhook/0.log"
Mar 18 10:28:15.898574 master-0 kubenswrapper[30420]: I0318 10:28:15.898523 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-64gfl_f9b32010-f07f-4e5a-a7a6-10e14dd65d91/nmstate-console-plugin/0.log"
Mar 18 10:28:15.918616 master-0 kubenswrapper[30420]: I0318 10:28:15.918569 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-clbx9_49f160ff-7093-4d65-99b5-51ea63e10306/nmstate-handler/0.log"
Mar 18 10:28:15.934748 master-0 kubenswrapper[30420]: I0318 10:28:15.934694 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-v4pqg_feb29aaa-6472-498d-9362-9f56312e248a/nmstate-metrics/0.log"
Mar 18 10:28:15.942023 master-0 kubenswrapper[30420]: I0318 10:28:15.941977 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-v4pqg_feb29aaa-6472-498d-9362-9f56312e248a/kube-rbac-proxy/0.log"
Mar 18 10:28:15.946031 master-0 kubenswrapper[30420]: I0318 10:28:15.945993 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-64gfl_f9b32010-f07f-4e5a-a7a6-10e14dd65d91/nmstate-console-plugin/0.log"
Mar 18 10:28:15.955173 master-0 kubenswrapper[30420]: I0318 10:28:15.955126 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-7z65r_9ac1f807-09b8-4fd1-be56-682238c80007/nmstate-operator/0.log"
Mar 18 10:28:15.964399 master-0 kubenswrapper[30420]: I0318 10:28:15.964348 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-bfbvm_8d3ab44d-452a-4080-b985-0e24d2d5bf5d/nmstate-webhook/0.log"
Mar 18 10:28:15.965381 master-0 kubenswrapper[30420]: I0318 10:28:15.965338 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-clbx9_49f160ff-7093-4d65-99b5-51ea63e10306/nmstate-handler/0.log"
Mar 18 10:28:15.982806 master-0 kubenswrapper[30420]: I0318 10:28:15.982762 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-v4pqg_feb29aaa-6472-498d-9362-9f56312e248a/nmstate-metrics/0.log"
Mar 18 10:28:15.998154 master-0 kubenswrapper[30420]: I0318 10:28:15.998097 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-v4pqg_feb29aaa-6472-498d-9362-9f56312e248a/kube-rbac-proxy/0.log"
Mar 18 10:28:16.023524 master-0 kubenswrapper[30420]: I0318 10:28:16.023448 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-7z65r_9ac1f807-09b8-4fd1-be56-682238c80007/nmstate-operator/0.log"
Mar 18 10:28:16.039987 master-0 kubenswrapper[30420]: I0318 10:28:16.039942 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-bfbvm_8d3ab44d-452a-4080-b985-0e24d2d5bf5d/nmstate-webhook/0.log"
Mar 18 10:28:16.765480 master-0 kubenswrapper[30420]: I0318 10:28:16.765414 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-dqvx5_c72cb4f2-d0f2-4f20-a3b6-bf6ccd17e141/kube-multus-additional-cni-plugins/0.log"
Mar 18 10:28:16.796818 master-0 kubenswrapper[30420]: I0318 10:28:16.796770 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-dg6dw_91331360-dc70-45bb-a815-e00664bae6c4/kube-multus-additional-cni-plugins/0.log"
Mar 18 10:28:16.817669 master-0 kubenswrapper[30420]: I0318 10:28:16.817558 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-dg6dw_91331360-dc70-45bb-a815-e00664bae6c4/egress-router-binary-copy/0.log"
Mar 18 10:28:16.831781 master-0 kubenswrapper[30420]: I0318 10:28:16.831740 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-dg6dw_91331360-dc70-45bb-a815-e00664bae6c4/cni-plugins/0.log"
Mar 18 10:28:16.846361 master-0 kubenswrapper[30420]: I0318 10:28:16.846319 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-dg6dw_91331360-dc70-45bb-a815-e00664bae6c4/bond-cni-plugin/0.log"
Mar 18 10:28:16.860546 master-0 kubenswrapper[30420]: I0318 10:28:16.860486 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-dg6dw_91331360-dc70-45bb-a815-e00664bae6c4/routeoverride-cni/0.log"
Mar 18 10:28:16.872306 master-0 kubenswrapper[30420]: I0318 10:28:16.872258 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-dg6dw_91331360-dc70-45bb-a815-e00664bae6c4/whereabouts-cni-bincopy/0.log"
Mar 18 10:28:16.884303 master-0 kubenswrapper[30420]: I0318 10:28:16.884240 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-dg6dw_91331360-dc70-45bb-a815-e00664bae6c4/whereabouts-cni/0.log"
Mar 18 10:28:16.903949 master-0 kubenswrapper[30420]: I0318 10:28:16.903896 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-58c9f8fc64-ssnvh_f875878f-3588-42f1-9488-750d9f4582f8/multus-admission-controller/0.log"
Mar 18 10:28:16.931847 master-0 kubenswrapper[30420]: I0318 10:28:16.931781 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-58c9f8fc64-ssnvh_f875878f-3588-42f1-9488-750d9f4582f8/kube-rbac-proxy/0.log"
Mar 18 10:28:16.994921 master-0 kubenswrapper[30420]: I0318 10:28:16.994860 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xgdvw_03de1ea6-da57-4e13-8e5a-d5e10a9f9957/kube-multus/0.log"
Mar 18 10:28:17.055942 master-0 kubenswrapper[30420]: I0318 10:28:17.055893 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xgdvw_03de1ea6-da57-4e13-8e5a-d5e10a9f9957/kube-multus/1.log"
Mar 18 10:28:17.080737 master-0 kubenswrapper[30420]: I0318 10:28:17.080691 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-tbxt4_0442ec6c-5973-40a5-a0c3-dc02de46d343/network-metrics-daemon/0.log"
Mar 18 10:28:17.091699 master-0 kubenswrapper[30420]: I0318 10:28:17.091656 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-tbxt4_0442ec6c-5973-40a5-a0c3-dc02de46d343/kube-rbac-proxy/0.log"
Mar 18 10:28:17.603091 master-0 kubenswrapper[30420]: I0318 10:28:17.603036 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_lvms-operator-56f66bc554-5vdd5_5bca57b4-b8b9-4298-9f45-1ad27ad0e85f/manager/0.log"
Mar 18 10:28:17.623778 master-0 kubenswrapper[30420]: I0318 10:28:17.623714 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-ccdmj_b261d945-73cc-4fe4-acf6-55c7d01fbad0/vg-manager/1.log"
Mar 18 10:28:17.680902 master-0 kubenswrapper[30420]: I0318 10:28:17.624751 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-ccdmj_b261d945-73cc-4fe4-acf6-55c7d01fbad0/vg-manager/0.log"
Mar 18 10:28:18.143508 master-0 kubenswrapper[30420]: I0318 10:28:18.143442 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_a4d7edd6-7975-468e-adea-138d92ed1be1/installer/0.log"
Mar 18 10:28:18.167817 master-0 kubenswrapper[30420]: I0318 10:28:18.167770 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_346d6f79-a9bd-4097-abe7-b68830aa2e84/installer/0.log"
Mar 18 10:28:18.186540 master-0 kubenswrapper[30420]: I0318 10:28:18.186477 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-4-master-0_449dc8b3-72b7-4be5-b5ab-ed4d632f52b2/installer/0.log"
Mar 18 10:28:18.203725 master-0 kubenswrapper[30420]: I0318 10:28:18.203669 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-4-retry-1-master-0_a6716938-ca14-4000-b7f1-b60e93e93c0d/installer/0.log"
Mar 18 10:28:18.227224 master-0 kubenswrapper[30420]: I0318 10:28:18.227182 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-5-master-0_2be13c7e-ab8c-43a4-ad8e-4ef8fd233348/installer/0.log"
Mar 18 10:28:18.263599 master-0 kubenswrapper[30420]: I0318 10:28:18.263545 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_06de8c68c0832ab8f7d68e9aec6f9555/kube-controller-manager/0.log"
Mar 18 10:28:18.406093 master-0 kubenswrapper[30420]: I0318 10:28:18.405984 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_06de8c68c0832ab8f7d68e9aec6f9555/kube-controller-manager/1.log"
Mar 18 10:28:18.456161 master-0 kubenswrapper[30420]: I0318 10:28:18.456110 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_06de8c68c0832ab8f7d68e9aec6f9555/cluster-policy-controller/0.log"
Mar 18 10:28:18.467807 master-0 kubenswrapper[30420]: I0318 10:28:18.467773 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_06de8c68c0832ab8f7d68e9aec6f9555/kube-controller-manager-cert-syncer/0.log"
Mar 18 10:28:18.479151 master-0 kubenswrapper[30420]: I0318 10:28:18.479105 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_06de8c68c0832ab8f7d68e9aec6f9555/kube-controller-manager-recovery-controller/0.log"
Mar 18 10:28:19.089705 master-0 kubenswrapper[30420]: I0318 10:28:19.089644 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-pzqqc_0999f781-3299-4cb6-ba76-2a4f4584c685/kube-controller-manager-operator/3.log"
Mar 18 10:28:19.091441 master-0 kubenswrapper[30420]: I0318 10:28:19.091410 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-pzqqc_0999f781-3299-4cb6-ba76-2a4f4584c685/kube-controller-manager-operator/2.log"
Mar 18 10:28:20.319997 master-0 kubenswrapper[30420]: I0318 10:28:20.319939 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_54a208d1-afe8-49b5-92e0-e27afb4abc80/installer/0.log"
Mar 18 10:28:20.335076 master-0 kubenswrapper[30420]: I0318 10:28:20.335020 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-5-master-0_1c62ceda-5e7e-4392-83b9-0d80856e1a96/installer/0.log"
Mar 18 10:28:20.351550 master-0 kubenswrapper[30420]: I0318 10:28:20.351502 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-6-master-0_4ea5939e-5f4d-4028-9384-2ec5710ecdc8/installer/0.log"
Mar 18 10:28:20.377762 master-0 kubenswrapper[30420]: I0318 10:28:20.377677 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-6-retry-1-master-0_fcf01f63-ed66-4f0d-b2df-97c77bbf8543/installer/0.log"
Mar 18 10:28:20.411358 master-0 kubenswrapper[30420]: I0318 10:28:20.411327 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_11a2f93448b9d54da9854663936e2b73/kube-scheduler/0.log"
Mar 18 10:28:20.423899 master-0 kubenswrapper[30420]: I0318 10:28:20.423849 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_11a2f93448b9d54da9854663936e2b73/kube-scheduler-cert-syncer/0.log"
Mar 18 10:28:20.437072 master-0 kubenswrapper[30420]: I0318 10:28:20.436959 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_11a2f93448b9d54da9854663936e2b73/kube-scheduler-recovery-controller/0.log"
Mar 18 10:28:20.450618 master-0 kubenswrapper[30420]: I0318 10:28:20.450516 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_11a2f93448b9d54da9854663936e2b73/wait-for-host-port/0.log"
Mar 18 10:28:20.468673 master-0 kubenswrapper[30420]: I0318 10:28:20.468615 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_revision-pruner-6-master-0_bb2f55a1-1af1-49b1-9dbc-d30063d6935e/pruner/0.log"
Mar 18 10:28:20.987052 master-0 kubenswrapper[30420]: I0318 10:28:20.986961 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-vj8tt_3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6/kube-scheduler-operator-container/1.log"
Mar 18 10:28:21.011623 master-0 kubenswrapper[30420]: I0318 10:28:21.011571 30420 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-vj8tt_3414fa1f-e4ee-4c7e-81cd-1fbd86486cd6/kube-scheduler-operator-container/2.log"